Cross-modal perceptual display: a new generation of audiovisual virtual environments (Crossmod)

Third Party Funds Group - Sub project


Acronym: Crossmod

Start date: 01.09.2006

End date: 31.12.2010



Project details

Scientific Abstract

Virtual environments (VEs) play an increasingly important role in our society. Two main sensory channels are currently exploited in VEs, the visual and the auditory, although sound remains greatly underused. The ever-increasing scene complexity of VEs means that it is currently not possible to display highly realistic scenes in real time, despite the availability of modern high-performance graphics and audio processors. However, the realism and quality of a virtual image or sound only needs to be as good as what the user can perceive: we need to display only what is necessary.

Despite recent research, little work exists on cross-modal effects, i.e., the effects that each channel (visual and auditory) has on the other, as a means to improve the efficiency and quality of VEs. CROSSMOD will study these effects and develop a better understanding of how perceptual issues affect auditory/visual display; this understanding will lead to novel algorithms for selectively rendering VEs. The cross-modal effects studied will potentially include the effect of spatial/latency congruence on quality perception, attention control, sound-induced changes in visual perception, and foveal/peripheral audiovisual effects. These initial experiments will be guided by their applicability to the improvement of VE display and authoring. An integrated cross-modal manager will be developed, using the results of the initial experiments to identify which cross-modal effects are useful for VE display.

The solutions developed by CROSSMOD will enable the display of perceptually highly realistic environments in real time, even for very complex scenes, as well as the use of active cross-modal effects such as attention control in the display and authoring of VEs. We will evaluate our hypotheses with further experiments in realistic, complex VEs. Further evaluation will be performed on three target applications: computer games, design/architecture, and clinical psychiatry, using platforms adapted to each application.
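The selective-rendering idea above, displaying only what the user can perceive, can be illustrated with a minimal sketch. The names, thresholds, and the saliency formula below are hypothetical illustrations, not the project's actual algorithms: each scene object receives a saliency score that falls off with angular distance from the gaze point (a foveal/peripheral effect) and is boosted when a localized sound cues attention to it (a cross-modal effect), and that score selects a rendering level of detail.

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    eccentricity_deg: float  # angular distance of the object from the gaze direction
    emits_sound: bool        # whether a localized sound source cues attention to it

def detail_level(obj: SceneObject) -> str:
    """Pick a rendering level of detail from a simple saliency score.

    Saliency decays linearly with eccentricity (foveal vision is sharper
    than peripheral vision) and gets a fixed boost when the object emits
    a sound, modeling sound-induced shifts of visual attention.
    All constants here are illustrative placeholders.
    """
    saliency = max(0.0, 1.0 - obj.eccentricity_deg / 60.0)
    if obj.emits_sound:
        saliency = min(1.0, saliency + 0.3)
    if saliency > 0.66:
        return "high"
    if saliency > 0.33:
        return "medium"
    return "low"
```

Under this toy model, an object near the gaze point is rendered at high detail, a silent peripheral object at low detail, and a peripheral object that emits a sound is promoted to medium detail, so the rendering budget follows perceived importance rather than raw scene complexity.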
