Multi-channel spectrograms for speech processing applications using deep learning methods

Arias-Vergara T, Klumpp P, Vasquez-Correa JC, Nöth E, Orozco-Arroyave JR, Schuster M (2020)


Publication Type: Journal article

Publication year: 2020

Journal: Pattern Analysis and Applications

DOI: 10.1007/s10044-020-00921-5

Abstract

Time–frequency representations of speech signals provide dynamic information about how the frequency content changes over time. To process this information, deep learning models with convolutional layers can be used to obtain feature maps. In many speech processing applications, the time–frequency representations are obtained by applying the short-time Fourier transform, and single-channel input tensors are used to feed the models. However, this may limit the potential of convolutional networks to learn different representations of the audio signal. In this paper, we propose a methodology to combine three different time–frequency representations of the signals, obtained by computing the continuous wavelet transform, Mel-spectrograms, and Gammatone spectrograms, into 3-channel spectrograms, in order to analyze speech in two different applications: (1) automatic detection of speech deficits in cochlear implant users and (2) phoneme class recognition to extract phone-attribute features. For this, two different deep learning-based models are considered: convolutional neural networks and recurrent neural networks with convolutional layers.
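
As a rough illustration of the 3-channel construction described in the abstract, the following Python sketch stacks a Mel-spectrogram, a Morlet-wavelet scalogram, and a Gammatone spectrogram into one input tensor. It assumes librosa, PyWavelets, and SciPy are available; all parameter values, the FIR gammatone filterbank, and the bilinear resizing used to align the three maps are illustrative assumptions, not the authors' exact pipeline.

import numpy as np
import librosa
import pywt
from scipy.ndimage import zoom
from scipy.signal import gammatone, lfilter

def mel_channel(y, sr, n_mels=64, n_fft=1024, hop=256):
    # Mel-spectrogram in dB.
    S = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=n_fft,
                                       hop_length=hop, n_mels=n_mels)
    return librosa.power_to_db(S, ref=np.max)

def cwt_channel(y, sr, n_scales=64):
    # Continuous wavelet transform (Morlet wavelet); log-magnitude scalogram.
    scales = np.geomspace(2, 256, num=n_scales)
    coeffs, _ = pywt.cwt(y, scales, 'morl', sampling_period=1.0 / sr)
    return np.log1p(np.abs(coeffs))

def gammatone_channel(y, sr, n_bands=64, frame=1024, hop=256):
    # Gammatone spectrogram: pass the signal through an FIR gammatone
    # filterbank, then take log frame energies per band. The log-spaced
    # center frequencies are an assumption.
    cfs = np.geomspace(80.0, 0.9 * sr / 2, num=n_bands)
    rows = []
    for cf in cfs:
        b, a = gammatone(cf, 'fir', fs=sr)
        sub = lfilter(b, a, y)
        frames = librosa.util.frame(sub, frame_length=frame, hop_length=hop)
        rows.append(np.log1p(np.sqrt((frames ** 2).mean(axis=0))))
    return np.stack(rows)

def three_channel_spectrogram(y, sr, shape=(64, 128)):
    # Resize each map to a common (frequency, time) grid, z-normalize
    # per channel, and stack into an H x W x 3 input tensor.
    chans = []
    for S in (mel_channel(y, sr), cwt_channel(y, sr), gammatone_channel(y, sr)):
        S = zoom(S, (shape[0] / S.shape[0], shape[1] / S.shape[1]), order=1)
        chans.append((S - S.mean()) / (S.std() + 1e-8))
    return np.stack(chans, axis=-1)

y = librosa.chirp(fmin=200, fmax=4000, sr=16000, duration=2.0)
x = three_channel_spectrogram(y, sr=16000)
print(x.shape)  # (64, 128, 3), ready for a 2D convolutional model

Resizing each map to a common grid is one simple way to reconcile the per-sample time axis of the wavelet transform with the framed time axes of the other two representations; per-channel z-normalization keeps the dynamic ranges comparable before convolution.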

How to cite

APA:

Arias-Vergara, T., Klumpp, P., Vasquez-Correa, J.C., Nöth, E., Orozco-Arroyave, J.R., & Schuster, M. (2020). Multi-channel spectrograms for speech processing applications using deep learning methods. Pattern Analysis and Applications. https://doi.org/10.1007/s10044-020-00921-5

MLA:

Arias-Vergara, T., et al. "Multi-channel spectrograms for speech processing applications using deep learning methods." Pattern Analysis and Applications (2020).

BibTeX:
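
A BibTeX entry assembled from the metadata above (the citation key is an assumption):

@article{AriasVergara2020multichannel,
  author  = {Arias-Vergara, T. and Klumpp, P. and Vasquez-Correa, J. C. and N{\"o}th, E. and Orozco-Arroyave, J. R. and Schuster, M.},
  title   = {Multi-channel spectrograms for speech processing applications using deep learning methods},
  journal = {Pattern Analysis and Applications},
  year    = {2020},
  doi     = {10.1007/s10044-020-00921-5}
}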