Cross-modal perceptual display: a new generation of audiovisual virtual environments

Third Party Funds Group - Sub project

Overall project details

Overall project: Cross-modal perceptual display: a new generation of audiovisual virtual environments


Project Details

Project leader:
Prof. Dr. Marc Stamminger


Contributing FAU Organisations:
Lehrstuhl für Informatik 9 (Graphische Datenverarbeitung)

Funding source: EU - 6th Framework Programme (FP6) / Focusing and integrating Community research / Specific Targeted Research Projects (STREP)
Acronym: Crossmod
Start date: 01/09/2006
End date: 31/12/2010


Research Fields

Rendering and Visualization
Lehrstuhl für Informatik 9 (Graphische Datenverarbeitung)
Virtual, Mixed, and Augmented Reality
Lehrstuhl für Informatik 9 (Graphische Datenverarbeitung)


Abstract (technical / expert description):

Virtual environments (VEs) play an increasingly important role in our society. Currently, two main sensory channels are exploited in VEs: visual and auditory, although sound is greatly underused. The ever-increasing scene complexity of VEs means that it is currently not possible to display highly realistic scenes in real time, despite the availability of modern high-performance graphics and audio processors. However, the realism and quality of a virtual image/sound only needs to be as good as what the user can perceive: we only need to display what is necessary.

Despite recent research, little work exists on cross-modal effects, i.e., the effects that each channel (visual and auditory) has on the other, to improve the efficiency and quality of VEs. CROSSMOD will study these effects and develop a better understanding of how perceptual issues affect auditory/visual display; this understanding will lead to the development of novel algorithms for selectively rendering VEs. The cross-modal effects studied will potentially include the effect of spatial/latency congruence on quality perception, attention control, sound-induced changes in visual perception, and foveal/peripheral audiovisual effects. These initial experiments will be guided by their applicability to the improvement of VE display and authoring. An integrated cross-modal manager will be developed, using the results of the initial experiments to identify which cross-modal effects are useful for VE display.

The solutions developed by CROSSMOD will enable the display of perceptually highly realistic environments in real time, even for very complex scenes, as well as the use of active cross-modal effects such as attention control in the display and authoring of VEs. We will evaluate our hypotheses with further experiments in realistic, complex VEs. Further evaluation will be performed on three target applications: computer games, design/architecture, and clinical psychiatry, using platforms adapted to each application.


External Partners

Institut national de Recherche en Informatique et en Automatique (INRIA)
National Center for Scientific Research / Centre national de la recherche scientifique (CNRS)
National Research Council (CNR)
Institut de Recherche et Coordination Acoustique/Musique (Ircam)
University of Bristol
Technische Universität Wien

Last updated on 2018-05-24 at 16:41