Schröter H, Rosenkranz T, Escalante-B AN, Maier A (2023). DeepFilterNet: Perceptually Motivated Real-Time Speech Enhancement
Publication Type: Conference Contribution
Publication year: 2023
Conference Proceedings Title: INTERSPEECH 2023
Event location: Dublin, Ireland
Open Access Link: https://arxiv.org/abs/2305.08227
Multi-frame algorithms for single-channel speech enhancement are able to take advantage of short-time correlations within the speech signal. Deep Filtering (DF) was proposed to directly estimate a complex filter in the frequency domain to exploit these correlations. In this work, we present a real-time speech enhancement demo using DeepFilterNet. DeepFilterNet's efficiency is enabled by exploiting domain knowledge of speech production and psychoacoustic perception. Our model is able to match state-of-the-art speech enhancement benchmarks while achieving a real-time factor of 0.19 on a single-threaded notebook CPU. The framework as well as pretrained weights have been published under an open-source license.
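To make the Deep Filtering idea concrete, the following is a minimal sketch of applying a complex multi-frame filter to an STFT. It assumes the filter coefficients are already given (in DeepFilterNet they would be predicted by the network); the function name `apply_deep_filter`, the causal-padding choice, and the array layout are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def apply_deep_filter(spec, coefs):
    """Apply a complex multi-frame filter to a spectrogram (Deep Filtering sketch).

    spec:  complex STFT, shape (T, F) — T frames, F frequency bins.
    coefs: complex filter coefficients, shape (T, F, N); coefs[t, f, i]
           weights the frame i steps in the past at bin f. In DeepFilterNet
           these are network outputs; here they are simply passed in.

    Each enhanced bin is a weighted sum over the current and N-1 previous
    frames at the same frequency, exploiting short-time speech correlations.
    """
    T, F = spec.shape
    N = coefs.shape[-1]
    # Zero-pad N-1 frames at the start so the filter stays causal.
    padded = np.concatenate([np.zeros((N - 1, F), dtype=spec.dtype), spec])
    out = np.zeros_like(spec)
    for i in range(N):
        # Frame that lies i steps in the past of each output frame.
        out += coefs[:, :, i] * padded[N - 1 - i : N - 1 - i + T, :]
    return out
```

With an identity filter (weight 1 on the current frame, 0 on past frames) the input spectrum passes through unchanged, which is a quick sanity check for the indexing.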
APA:
Schröter, H., Rosenkranz, T., Escalante-B, A.N., & Maier, A. (2023). DeepFilterNet: Perceptually Motivated Real-Time Speech Enhancement. In INTERSPEECH 2023. Dublin, Ireland.
MLA:
Schröter, Hendrik, et al. "DeepFilterNet: Perceptually Motivated Real-Time Speech Enhancement." Proceedings of INTERSPEECH 2023, Dublin, Ireland, 2023.
BibTeX:
@inproceedings{schroeter2023deepfilternet,
  author    = {Schr{\"o}ter, H. and Rosenkranz, T. and Escalante-B., A. N. and Maier, A.},
  title     = {{DeepFilterNet}: Perceptually Motivated Real-Time Speech Enhancement},
  booktitle = {INTERSPEECH 2023},
  year      = {2023},
  address   = {Dublin, Ireland}
}