Reitz P, Veihelmann TC, Bönsch J, Franchi N, Lübke M (2026): Deep Learning-Based Multi-Radar Fusion for Robust Real-Time Object Detection. IEEE Sensors Letters.
Publication Language: English
Publication Type: Journal article, Letter
Publication year: 2026
URI: https://ieeexplore.ieee.org/document/11328785
DOI: 10.1109/LSENS.2025.3650621
Low resolution, sparse reflections, and environmental noise limit the reliability of radar-based object detection. This letter presents a YOLO-inspired deep learning model with dual-radar fusion to enhance detection robustness. Range-Doppler maps from two static 60 GHz frequency-modulated continuous-wave (FMCW) radars are processed by a dual-backbone architecture with attention based on the convolutional block attention module (CBAM) and a lightweight dynamic weighting module. The system monitors moving humans in a parking garage. At the best operating point, the proposed fusion improves the F1-score from 0.944 (single radar) to 0.962, with precision/recall increasing from 0.930/0.959 to 0.953/0.972. At matched recall (≈0.967), the false-positive rate decreases from 0.070 to 0.031, a reduction of about 55%. Real-time performance is maintained, with inference speeds above 100 frames per second on a desktop CPU. These results demonstrate that dual-radar feature fusion enables accurate and efficient radar perception in cluttered environments.
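To make the described pipeline concrete, the following is a minimal PyTorch sketch of the dual-backbone fusion named in the abstract. The letter does not publish code, so beyond the overall structure (one backbone per radar's range-Doppler map, CBAM attention, and a learned weighting that fuses the two feature maps), everything here is an assumption: the channel sizes, backbone depth, class and parameter names, and the exact form of the lightweight dynamic weighting module (sketched as a softmax over two weights predicted from pooled features) are illustrative only.

import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel then spatial attention."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel attention: shared MLP over avg- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # Spatial attention: 7x7 conv over channel-wise avg/max maps.
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        sp = torch.cat([x.mean(dim=1, keepdim=True),
                        x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(sp))

class DualRadarFusion(nn.Module):
    """Two backbones over per-radar range-Doppler maps, fused by learned weights."""
    def __init__(self, channels: int = 64):
        super().__init__()
        def backbone():
            # Hypothetical backbone; the letter's exact layer stack is not given.
            return nn.Sequential(
                nn.Conv2d(1, channels, 3, stride=2, padding=1),
                nn.BatchNorm2d(channels), nn.SiLU(),
                nn.Conv2d(channels, channels, 3, stride=2, padding=1),
                nn.BatchNorm2d(channels), nn.SiLU(),
                CBAM(channels),
            )
        self.backbone_a = backbone()  # radar 1
        self.backbone_b = backbone()  # radar 2
        # Hypothetical dynamic weighting: two softmax weights predicted from
        # the concatenated global descriptors of both feature maps.
        self.weighting = nn.Linear(2 * channels, 2)

    def forward(self, rd_a: torch.Tensor, rd_b: torch.Tensor) -> torch.Tensor:
        fa, fb = self.backbone_a(rd_a), self.backbone_b(rd_b)
        desc = torch.cat([fa.mean(dim=(2, 3)), fb.mean(dim=(2, 3))], dim=1)
        w = torch.softmax(self.weighting(desc), dim=1)  # (B, 2)
        fused = w[:, 0, None, None, None] * fa + w[:, 1, None, None, None] * fb
        return fused  # would feed a YOLO-style detection head

if __name__ == "__main__":
    model = DualRadarFusion()
    rd = torch.randn(1, 1, 64, 64)  # single-channel range-Doppler map
    print(model(rd, rd).shape)      # torch.Size([1, 64, 16, 16])

As a quick consistency check on the reported metrics, F1 = 2PR/(P + R) reproduces the abstract's values: 2(0.953)(0.972)/(0.953 + 0.972) ≈ 0.962 for the fused system versus 2(0.930)(0.959)/(0.930 + 0.959) ≈ 0.944 for a single radar.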
APA:
Reitz, P., Veihelmann, T. C., Bönsch, J., Franchi, N., & Lübke, M. (2026). Deep Learning-Based Multi-Radar Fusion for Robust Real-Time Object Detection. IEEE Sensors Letters. https://doi.org/10.1109/LSENS.2025.3650621
MLA:
Reitz, Philipp, et al. "Deep Learning-Based Multi-Radar Fusion for Robust Real-Time Object Detection." IEEE Sensors Letters, 2026, https://doi.org/10.1109/LSENS.2025.3650621.
BibTeX:
@article{reitz2026multiradar,
  author  = {Reitz, P. and Veihelmann, T. C. and B{\"o}nsch, J. and Franchi, N. and L{\"u}bke, M.},
  title   = {Deep Learning-Based Multi-Radar Fusion for Robust Real-Time Object Detection},
  journal = {IEEE Sensors Letters},
  year    = {2026},
  doi     = {10.1109/LSENS.2025.3650621}
}