The interplay of multisensory causal inference and attention in audiovisual perception (MultiAttend)

Third-party-funded individual grant


Acronym: MultiAttend

Start date: 01.11.2023


Project details

Scientific Abstract

Humans are constantly bombarded with multisensory signals that the brain binds coherently across sensory channels to create a multisensory perception of the environment, for example the voices and faces of multiple speakers at a busy party. Yet, in a complex environment with excessive streams of multisensory signals, how does the brain infer which objects caused the signals, given its limited attentional capacities? The brain has to solve two interrelated problems. First, it needs to infer the causal structure of the multisensory signals, so as to integrate them in proportion to their sensory reliabilities in the case of a common cause, or to segregate them in the case of independent causes. Second, it needs to use selective attention to bias the competition between multisensory signals, focusing on relevant signals and avoiding perceptual overload. The brain cannot solve these two tasks in isolation, but has to use its limited attentional resources to selectively infer the causal structure of relevant multisensory stimuli. The current project therefore investigates the interplay of multisensory causal inference and attention in audiovisual perception. Specifically, we characterize how spatial attention, attentional resources, and attentional competition between audiovisual signals modulate inferences on the signals' causal structure. Attention could interact with causal inference by modulating the a priori causal assumption, the sensory reliabilities, and/or the combination of audiovisual with unisensory signal estimates.
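
The two-step computation described above corresponds to the standard Bayesian causal-inference model of multisensory perception (e.g., Körding et al., 2007, PLoS ONE). As a minimal illustration of the three quantities the abstract names (the a priori causal assumption, the sensory reliabilities, and the combination of fused with unisensory estimates), the following Python sketch infers the probability of a common cause from one pair of audiovisual spatial measurements and model-averages the resulting location estimates. This is not project code; the parameter values and the model-averaging readout are illustrative assumptions.

    import numpy as np

    def gauss(x, mu, var):
        """Gaussian density N(x; mu, var)."""
        return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

    def causal_inference(x_a, x_v, sigma_a=4.0, sigma_v=1.0,
                         mu_p=0.0, sigma_p=10.0, p_common=0.5):
        """Bayesian causal inference on one audiovisual spatial trial.

        x_a, x_v : noisy auditory/visual location measurements (deg).
        sigma_a, sigma_v : sensory noise SDs (reliability = 1/variance).
        mu_p, sigma_p : Gaussian spatial prior over source locations.
        p_common : a priori probability that both signals share one cause.
        All parameter values here are illustrative placeholders.
        """
        var_a, var_v, var_p = sigma_a**2, sigma_v**2, sigma_p**2

        # Likelihood of the measurement pair under one common cause (C = 1),
        # with the shared source location integrated out analytically.
        var_sum = var_a * var_v + var_a * var_p + var_v * var_p
        like_c1 = (np.exp(-0.5 * ((x_a - x_v)**2 * var_p
                                  + (x_a - mu_p)**2 * var_v
                                  + (x_v - mu_p)**2 * var_a) / var_sum)
                   / (2 * np.pi * np.sqrt(var_sum)))

        # Likelihood under two independent causes (C = 2).
        like_c2 = gauss(x_a, mu_p, var_a + var_p) * gauss(x_v, mu_p, var_v + var_p)

        # Posterior probability of a common cause (Bayes' rule).
        post_c1 = (like_c1 * p_common
                   / (like_c1 * p_common + like_c2 * (1 - p_common)))

        # Fused estimate: reliability-weighted average of both cues and prior.
        w_a, w_v, w_p = 1 / var_a, 1 / var_v, 1 / var_p
        s_fused = (w_a * x_a + w_v * x_v + w_p * mu_p) / (w_a + w_v + w_p)

        # Segregated auditory estimate: auditory cue and prior only.
        s_aud_seg = (w_a * x_a + w_p * mu_p) / (w_a + w_p)

        # Model averaging: combine fused and segregated estimates,
        # weighted by the inferred causal structure.
        s_aud = post_c1 * s_fused + (1 - post_c1) * s_aud_seg
        return post_c1, s_aud

    p_c1, s_hat = causal_inference(x_a=6.0, x_v=2.0)
    print(f"P(common cause) = {p_c1:.2f}, auditory estimate = {s_hat:.2f} deg")

In this framing, the candidate attentional mechanisms named above map onto distinct parameters: spatial attention could shift p_common, the availability of attentional resources could scale sigma_a or sigma_v, and attentional competition could alter the final combination step.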

To investigate the interplay of multisensory causal inference and attention in audiovisual perception, the project will combine audiovisual psychophysical experiments with neurophysiological EEG measurements in healthy human participants across three work packages (WPs). In the first WP, we will characterize how the brain reorients endogenous and exogenous selective visual spatial attention, and how this influences causal inferences on the structure of audiovisual spatial stimuli. In the second WP, we will explore how the availability of attentional resources modulates the brain's causal inferences on audiovisual spatial stimuli using a dual-task design. In the third WP, we will investigate how the brain performs causal inference on a numeric audiovisual target stimulus that competes for attentional resources with a synchronous numeric audiovisual distractor stimulus. The project's results will improve our understanding of how humans multisensorily perceive attentionally demanding, complex environments. Fundamentally, the results may extend current optimal models of Bayesian multisensory causal inference to account for attentional processes. For applications, the results may inform the design of high-quality audiovisual media (e.g., virtual environments) and user-friendly human-machine interfaces (e.g., vehicle cockpits) that match the brain's perceptual and attentional capabilities.

