Deep Multi-Frame Filtering for Hearing Aids

Schröter H, Rosenkranz T, Escalante-B AN, Maier A (2023)


Publication Type: Conference Contribution

Publication year: 2023

Conference Proceedings Title: INTERSPEECH 2023

Event location: Dublin, Ireland

Open Access Link: https://arxiv.org/abs/2305.08225

Abstract

Multi-frame algorithms for single-channel speech enhancement are able to take advantage of short-time correlations within the speech signal. Deep filtering (DF), a complex multi-frame (MF) filter, recently demonstrated its capabilities for low-latency scenarios such as hearing aids (HAs). Alternatively, the complex filter can be estimated via an MF minimum variance distortionless response (MVDR) filter or an MF Wiener filter (WF). Previous studies have shown that incorporating algorithmic domain knowledge via an MVDR filter can be beneficial compared to direct filter estimation with DF. In this work, we compare the use of various multi-frame filters, namely DF, MF-MVDR, and MF-WF, for HAs. We assess different covariance estimation methods for both MF-MVDR and MF-WF and objectively demonstrate improved performance compared to direct DF estimation, significantly outperforming related work while improving runtime performance.
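
The listing below is a minimal sketch of the textbook multi-frame MVDR formulation that the abstract refers to, applied to a single time-frequency bin. It is not the paper's DNN-based estimator; the function name, the frame count N, and the toy covariance and interframe-correlation values are illustrative assumptions only.

import numpy as np

def mf_mvdr_filter(Phi_u, gamma_x):
    # Textbook MF-MVDR filter: h = Phi_u^{-1} gamma_x / (gamma_x^H Phi_u^{-1} gamma_x),
    # where Phi_u is the (N, N) covariance of the undesired (noise) signal and
    # gamma_x is the (N,) speech interframe correlation (IFC) vector.
    num = np.linalg.solve(Phi_u, gamma_x)   # Phi_u^{-1} gamma_x
    den = np.vdot(gamma_x, num)             # gamma_x^H Phi_u^{-1} gamma_x
    return num / den

# Toy usage on one time-frequency bin with N = 5 consecutive STFT frames
# (placeholder values, not taken from the paper).
rng = np.random.default_rng(0)
N = 5
y = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # noisy multi-frame vector
Phi_u = np.eye(N, dtype=complex)                           # assumed noise covariance
gamma_x = np.zeros(N, dtype=complex)
gamma_x[0] = 1.0                                           # speech assumed uncorrelated across frames
h = mf_mvdr_filter(Phi_u, gamma_x)
enhanced = np.vdot(h, y)                                   # filtered output h^H y
print(enhanced)

With these placeholder statistics the filter simply selects the current frame; in practice the noise covariance and the IFC vector would be estimated from the signal, for instance by a deep network as in this work, so that the filter can exploit short-time speech correlations across frames.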

How to cite

APA:

Schröter, H., Rosenkranz, T., Escalante-B, A.N., & Maier, A. (2023). Deep Multi-Frame Filtering for Hearing Aids. In INTERSPEECH 2023. Dublin, Ireland.

MLA:

Schröter, Hendrik, et al. "Deep Multi-Frame Filtering for Hearing Aids." Proceedings of INTERSPEECH 2023, Dublin, Ireland, 2023.
