Robust decoding of the speech envelope from EEG recordings through deep neural networks

Thornton M, Mandic D, Reichenbach T (2022)


Publication Type: Journal article

Publication year: 2022

Journal: Journal of Neural Engineering

Journal Volume: 19

Journal Issue: 4

DOI: 10.1088/1741-2552/ac7976

Abstract

Objective. Smart hearing aids which can decode the focus of a user's attention could considerably improve comprehension levels in noisy environments. Methods for decoding auditory attention from electroencephalography (EEG) have attracted considerable interest for this reason. Recent studies suggest that the integration of deep neural networks (DNNs) into existing auditory attention decoding (AAD) algorithms is highly beneficial, although it remains unclear whether these enhanced algorithms can perform robustly in different real-world scenarios. We therefore sought to characterise the performance of DNNs at reconstructing the envelope of an attended speech stream from EEG recordings in different listening conditions. In addition, given the relatively sparse availability of EEG data, we investigated the possibility of applying subject-independent algorithms to EEG recorded from unseen individuals.

Approach. Both linear models and nonlinear DNNs were employed to decode the envelope of clean speech from EEG recordings, with and without subject-specific information. The mean behaviour, as well as the variability of the reconstruction, was characterised for each model. We then trained subject-specific linear models and DNNs to reconstruct the envelope of speech in clean and noisy conditions, and investigated how well they performed in different listening scenarios. We also established that these models can be used to decode auditory attention in competing-speaker scenarios.

Main results. The DNNs offered a considerable advantage over their linear analogue at reconstructing the envelope of clean speech. This advantage persisted even when subject-specific information was unavailable at the time of training. The same DNN architectures generalised to a distinct dataset, which contained EEG recorded under a variety of listening conditions. In competing-speaker and speech-in-noise conditions, the DNNs significantly outperformed the linear models. Finally, the DNNs offered a considerable improvement over the linear approach at decoding auditory attention in competing-speaker scenarios.

Significance. We present the first detailed study into the extent to which DNNs can be employed for reconstructing the envelope of an attended speech stream. We conclusively demonstrate that DNNs improve the reconstruction of the attended speech envelope. The variance of the reconstruction error is shown to be similar for both the DNNs and the linear model. DNNs therefore show promise for real-world AAD, since they perform well in multiple listening conditions and generalise to data recorded from unseen participants.
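For readers unfamiliar with the decoding task described above, the following is a minimal sketch of the kind of linear backward model that serves as the baseline in this literature: a ridge-regularised regression from time-lagged EEG channels to the speech envelope, with attention decoded by correlating the reconstruction against each candidate speaker's envelope. The synthetic data, lag range, and ridge parameter are illustrative assumptions, not details taken from the paper; the DNNs studied in the paper replace the linear map below with a nonlinear function trained on the same task.

```python
# Sketch of a linear backward model for envelope reconstruction and
# correlation-based auditory attention decoding. All data are synthetic.
import numpy as np

rng = np.random.default_rng(0)

fs = 64                       # sampling rate in Hz (assumed)
n_samples, n_channels = 60 * fs, 63
lags = np.arange(0, 16)       # ~0-250 ms of EEG context at 64 Hz (assumed)

# Synthetic stand-ins for the two speakers' envelopes and the EEG.
env_attended = np.abs(rng.standard_normal(n_samples))
env_unattended = np.abs(rng.standard_normal(n_samples))
eeg = rng.standard_normal((n_samples, n_channels))
eeg[:, 0] += env_attended     # inject a weak trace of the attended envelope

def lagged_design(eeg, lags):
    """Stack time-lagged copies of every EEG channel into one design matrix."""
    n, c = eeg.shape
    X = np.zeros((n, c * len(lags)))
    for i, lag in enumerate(lags):
        X[lag:, i * c:(i + 1) * c] = eeg[:n - lag]
    return X

X = lagged_design(eeg, lags)

# Train on the first half, evaluate on the held-out second half.
split = n_samples // 2
X_tr, X_te = X[:split], X[split:]
y_tr = env_attended[:split]

# Ridge-regularised least squares: w = (X'X + alpha*I)^-1 X'y.
alpha = 1e3
w = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(X.shape[1]), X_tr.T @ y_tr)
reconstruction = X_te @ w

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

# The speaker whose envelope correlates best with the reconstruction is
# taken to be the attended one.
print("r(attended)   =", corr(reconstruction, env_attended[split:]))
print("r(unattended) =", corr(reconstruction, env_unattended[split:]))
```

In a competing-speaker evaluation, this comparison would typically be repeated over short decision windows to yield a classification accuracy, which is the quantity on which the paper reports the DNNs outperforming the linear models.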

How to cite

APA:

Thornton, M., Mandic, D., & Reichenbach, T. (2022). Robust decoding of the speech envelope from EEG recordings through deep neural networks. Journal of Neural Engineering, 19(4). https://dx.doi.org/10.1088/1741-2552/ac7976

MLA:

Thornton, Mike, Danilo Mandic, and Tobias Reichenbach. "Robust decoding of the speech envelope from EEG recordings through deep neural networks." Journal of Neural Engineering 19.4 (2022).
