Anemüller C, Adami A, Herre J (2023): Efficient Binaural Rendering of Spatially Extended Sound Sources
Publication Language: English
Publication Type: Journal article, Original article
Publication year: 2023
Original Authors: Carlotta Anemüller, Alexander Adami, Jürgen Herre
Journal Volume: 71
Pages Range: 281-292
Journal Issue: 5
Abstract: In virtual/augmented reality and other 3D applications with binaural audio, it is often desirable to render sound sources with a certain spatial extent in a realistic way. A common approach is to distribute multiple correlated or decorrelated point sources over the desired spatial extent range, possibly deriving them from the original source signal by applying suitable decorrelation filters. Based on this basic model, a novel method for efficient and realistic binaural rendering of spatially extended sound sources is proposed. Instead of rendering each point source individually, the target auditory cues are synthesized directly from just two decorrelated input signals. This procedure offers low computational complexity and relaxes the requirements on the decorrelation filters. An objective evaluation shows that the proposed method matches the basic rendering model well in terms of perceptually relevant objective metrics. A subjective listening test furthermore shows that the output of the proposed method is perceptually almost identical to the output of the basic rendering model. The technique is part of the Reference Model architecture of the upcoming MPEG-I Immersive Audio standard.
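For illustration only, the basic rendering model referred to in the abstract can be sketched in a few lines of Python. Everything below is an assumption made for this sketch rather than material from the paper: the decorrelator is a crude random-phase all-pass, the "HRIRs" are toy single-impulse ITD/ILD approximations, and the function names, azimuth range, and number of point sources are invented. A real implementation would use measured HRTFs and properly designed decorrelation filters.

import numpy as np

def decorrelate(x, seed):
    # Crude decorrelator: random-phase all-pass applied in the frequency
    # domain (a stand-in for a properly designed decorrelation filter).
    rng = np.random.default_rng(seed)
    X = np.fft.rfft(x)
    X *= np.exp(1j * rng.uniform(-np.pi, np.pi, X.shape))
    return np.fft.irfft(X, n=len(x))

def toy_hrir(azimuth_deg, fs=48000, length=64):
    # Toy "HRIR": one delayed/scaled impulse per ear, mimicking the
    # interaural time and level differences for the given azimuth.
    itd = 0.0007 * np.sin(np.radians(azimuth_deg))               # up to ~0.7 ms ITD
    ild = 10 ** (6.0 * np.sin(np.radians(azimuth_deg)) / 20.0)   # up to ~6 dB ILD
    left, right = np.zeros(length), np.zeros(length)
    d = int(round(abs(itd) * fs))
    if itd >= 0:                    # source towards the left ear
        left[0], right[d] = ild, 1.0 / ild
    else:                           # source towards the right ear
        left[d], right[0] = 1.0 / ild, ild
    return left, right

def render_extended_source(x, az_lo, az_hi, n_points=8, fs=48000):
    # Basic model: distribute n_points decorrelated point sources over the
    # azimuth range [az_lo, az_hi] and sum their binaural contributions.
    out = np.zeros((2, len(x) + 63))
    for k, az in enumerate(np.linspace(az_lo, az_hi, n_points)):
        s = decorrelate(x, seed=k)
        h_l, h_r = toy_hrir(az, fs)
        out[0] += np.convolve(s, h_l)
        out[1] += np.convolve(s, h_r)
    return out / np.sqrt(n_points)  # rough loudness normalization

# Example: a 1-s noise burst rendered as a source spanning -45..+45 degrees azimuth.
fs = 48000
x = np.random.default_rng(0).standard_normal(fs)
binaural = render_extended_source(x, -45.0, 45.0, n_points=8, fs=fs)

The cost of this basic model grows with the number of point sources (one decorrelation filter and one pair of convolutions each). The method proposed in the paper avoids this by synthesizing the target auditory cues directly from only two decorrelated input signals, which is where the reported savings in complexity and the relaxed decorrelation-filter requirements come from.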
APA:
Anemüller, C., Adami, A., & Herre, J. (2023). Efficient Binaural Rendering of Spatially Extended Sound Sources. Journal of the Audio Engineering Society, 71(5), 281-292. https://doi.org/10.17743/jaes.2022.0069
MLA:
Anemüller, Carlotta, Alexander Adami, and Jürgen Herre. "Efficient Binaural Rendering of Spatially Extended Sound Sources." Journal of the Audio Engineering Society 71.5 (2023): 281-292.