Mapping sounds onto images using binaural spectrograms

Deleforge A, Drouard V, Girin L, Horaud R (2014)


Publication Language: English

Publication Type: Conference contribution

Publication year: 2014

Pages Range: 2470-2474

Event location: Lisbon, Portugal

ISBN: 978-0-9928626-1-9

Abstract

We propose a novel method for mapping sound spectrograms onto images, thus enabling alignment between auditory and visual features for subsequent multimodal processing. We adopt a supervised learning approach to this audio-visual fusion problem, proceeding in two steps. First, we use a Gaussian mixture of locally-linear regressions to learn a mapping from image locations to binaural spectrograms. Second, we derive a closed-form expression for the conditional posterior probability of an image location, given both an observed spectrogram emitted from an unknown source direction and the previously learnt mapping parameters. Notably, the proposed method can handle completely different spectrograms for training and for alignment: fixed-length wide-spectrum sounds are used for learning, so the regression is fully and robustly estimated, while variable-length sparse-spectrum sounds, e.g., speech, are used for alignment. The proposed method successfully extracts the image location of speech utterances in realistic reverberant-room scenarios.
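To make the inversion step concrete, the sketch below shows how a Gaussian mixture of locally-linear regressions learned in the forward direction (image location to spectrogram) can be inverted in closed form to yield a posterior over image locations given an observed spectrogram. This is a minimal illustration under assumed toy parameters, not the authors' code: the function and parameter names (posterior_location, pi, c, Gamma, A, b, Sigma) and the dimensions are illustrative, the EM training stage is omitted, and the paper's handling of variable-length, sparse-spectrum observations is not reproduced.

# Minimal sketch (assumed toy parameters): closed-form inverse of a Gaussian
# mixture of locally-linear regressions. Forward model, per component k:
#   y | x, k ~ N(A_k x + b_k, Sigma_k),  x | k ~ N(c_k, Gamma_k),  p(k) = pi_k
import numpy as np
from scipy.stats import multivariate_normal

def posterior_location(y, pi, c, Gamma, A, b, Sigma):
    """Posterior mean of the low-dimensional location x given a spectrogram y."""
    K = len(pi)
    means, weights = [], []
    for k in range(K):
        # Marginal likelihood of y under component k:
        # N(A_k c_k + b_k, Sigma_k + A_k Gamma_k A_k^T)
        m_y = A[k] @ c[k] + b[k]
        S_y = Sigma[k] + A[k] @ Gamma[k] @ A[k].T
        weights.append(pi[k] * multivariate_normal.pdf(y, mean=m_y, cov=S_y))
        # Inverse (location-given-spectrogram) affine map for component k
        S_star = np.linalg.inv(np.linalg.inv(Gamma[k])
                               + A[k].T @ np.linalg.solve(Sigma[k], A[k]))
        A_star = S_star @ A[k].T @ np.linalg.inv(Sigma[k])
        b_star = S_star @ (np.linalg.solve(Gamma[k], c[k])
                           - A[k].T @ np.linalg.solve(Sigma[k], b[k]))
        means.append(A_star @ y + b_star)
    weights = np.array(weights)
    weights /= weights.sum()  # component responsibilities given y
    return sum(w * m for w, m in zip(weights, means))

# Toy usage: K components, 2-D image locations, D-dimensional spectrograms
rng = np.random.default_rng(0)
K, L, D = 3, 2, 8
pi = np.full(K, 1.0 / K)
c = rng.normal(size=(K, L))
Gamma = np.stack([np.eye(L) for _ in range(K)])
A = rng.normal(size=(K, D, L))
b = rng.normal(size=(K, D))
Sigma = np.stack([0.1 * np.eye(D) for _ in range(K)])
y_obs = rng.normal(size=D)
print(posterior_location(y_obs, pi, c, Gamma, A, b, Sigma))

In words, each component contributes an affine estimate of the location, and the contributions are weighted by how well that component explains the observed spectrogram; the posterior mean is the responsibility-weighted average of these estimates.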


How to cite

APA:

Deleforge, A., Drouard, V., Girin, L., & Horaud, R. (2014). Mapping sounds onto images using binaural spectrograms. In Proceedings of the 22nd European Signal Processing Conference (EUSIPCO) (pp. 2470-2474). Lisbon, PT.

MLA:

Deleforge, Antoine, et al. "Mapping sounds onto images using binaural spectrograms." Proceedings of the 22nd European Signal Processing Conference (EUSIPCO), Lisbon 2014. 2470-2474.

BibTeX:
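A BibTeX entry assembled from the metadata above (the citation key is illustrative; author initials follow the author list shown on this page):

@inproceedings{deleforge2014mapping,
  author    = {Deleforge, A. and Drouard, V. and Girin, L. and Horaud, R.},
  title     = {Mapping sounds onto images using binaural spectrograms},
  booktitle = {Proceedings of the 22nd European Signal Processing Conference (EUSIPCO)},
  year      = {2014},
  pages     = {2470--2474},
  address   = {Lisbon, Portugal}
}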