Time-Frequency Masking Based Online Multi-Channel Speech Enhancement with Convolutional Recurrent Neural Networks

Chakrabarty S, Habets E (2019)


Publication Type: Journal article

Publication year: 2019

Journal: IEEE Journal of Selected Topics in Signal Processing

Volume: 13

Pages Range: 787-799

Article Number: 8691791

Journal Issue: 4

DOI: 10.1109/JSTSP.2019.2911401

Abstract

This paper presents a time-frequency masking based online multi-channel speech enhancement approach that uses a convolutional recurrent neural network to estimate the mask. The magnitude and phase components of the short-time Fourier transform coefficients for multiple time frames are provided as input, so that the network can discriminate between the directional speech and the noise components based on the spatial characteristics of the individual signals as well as their spectro-temporal structure. The estimation of two different masks, namely the ideal ratio mask (IRM) and the ideal binary mask (IBM), is discussed along with two different approaches for incorporating the mask to obtain the desired signal. In the first approach, the mask is applied directly as a real-valued gain to a reference microphone signal, whereas in the second approach, the masks are used as activity indicators for the recursive update of the power spectral density (PSD) matrices employed within a beamformer. The performance of the proposed system, with the two estimated masks utilized within the two enhancement approaches, is evaluated with both simulated and measured room impulse responses. It is shown that the IBM is better suited as an indicator for the PSD updates, while direct application of the IRM as a real-valued gain leads to a larger improvement in short-time objective intelligibility. An analysis of the performance also demonstrates the robustness of the system to different angular positions of the speech source.
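To make the two mask-application strategies described in the abstract concrete, the following is a minimal NumPy sketch, not the authors' implementation. It assumes a multi-channel STFT array Y of shape (mics, frames, bins) and network-estimated masks of shape (frames, bins); all function and variable names are illustrative, and the beamformer uses a standard Souden-style MVDR formulation as a stand-in for the beamforming stage discussed in the paper.

```python
import numpy as np

# Assumed (hypothetical) shapes:
#   Y          : complex STFT of the noisy mixture, shape (M, T, F)
#   speech/noise masks : network outputs in [0, 1], shape (T, F)

def enhance_by_masking(Y, mask, ref_mic=0):
    """Approach 1: apply the estimated mask as a real-valued gain
    to the STFT of a reference microphone signal."""
    return mask * Y[ref_mic]                      # enhanced STFT, shape (T, F)

def enhance_by_beamforming(Y, speech_mask, noise_mask, alpha=0.95, ref_mic=0):
    """Approach 2: use the masks as activity indicators for recursive
    (online) speech/noise PSD matrix updates, then apply a per-bin
    MVDR-style beamformer (Souden formulation, assumed here)."""
    M, T, F = Y.shape
    reg = 1e-6 * np.eye(M)
    Phi_s = np.tile(reg, (F, 1, 1)).astype(complex)   # speech PSDs, (F, M, M)
    Phi_n = np.tile(reg, (F, 1, 1)).astype(complex)   # noise PSDs, (F, M, M)
    out = np.zeros((T, F), dtype=complex)

    for t in range(T):
        y = Y[:, t, :].T                               # frame snapshot, (F, M)
        outer = y[:, :, None] * y[:, None, :].conj()   # rank-1 terms y y^H, (F, M, M)

        # Mask-gated recursive PSD updates (online averaging).
        Phi_s = alpha * Phi_s + (1 - alpha) * speech_mask[t, :, None, None] * outer
        Phi_n = alpha * Phi_n + (1 - alpha) * noise_mask[t, :, None, None] * outer

        for f in range(F):
            # MVDR weights: (Phi_n^{-1} Phi_s e_ref) / trace(Phi_n^{-1} Phi_s).
            num = np.linalg.solve(Phi_n[f] + reg, Phi_s[f][:, ref_mic])
            den = np.real(np.trace(np.linalg.solve(Phi_n[f] + reg, Phi_s[f])))
            w = num / max(den, 1e-10)
            out[t, f] = w.conj() @ y[f]
    return out
```

In this sketch an IRM-like mask would typically feed enhance_by_masking directly, whereas IBM-like speech and noise masks would gate the PSD updates in enhance_by_beamforming, mirroring the two options compared in the paper.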

How to cite

APA:

Chakrabarty, S., & Habets, E. (2019). Time-Frequency Masking Based Online Multi-Channel Speech Enhancement with Convolutional Recurrent Neural Networks. IEEE Journal of Selected Topics in Signal Processing, 13(4), 787-799. https://dx.doi.org/10.1109/JSTSP.2019.2911401

MLA:

Chakrabarty, Soumitro, and Emanuël Habets. "Time-Frequency Masking Based Online Multi-Channel Speech Enhancement with Convolutional Recurrent Neural Networks." IEEE Journal of Selected Topics in Signal Processing 13.4 (2019): 787-799.
