Jin Y, Kuang Y, Hoffmann M, Schüßler C, Deligiannis A, Fuentes-Michel JC, Vossiek M (2023). Radar and Lidar Deep Fusion: Providing Doppler Contexts to Time-of-Flight Lidar.
Publication Type: Journal article, Original article
Publication year: 2023
Journal Volume: 23
Journal Issue: 20
Page Range: 25587-25600
DOI: 10.1109/JSEN.2023.3313093
This work proposes a novel sensor fusion-based, single-frame, multiclass object detection method for road users, including vehicles, pedestrians, and cyclists, in which the lidar point cloud (PC) is deeply fused with the corresponding Doppler contexts, i.e., the Doppler features from the radar cube. The method is built on convolutional neural networks (CNNs) and consists of two stages: in the first stage, region proposals are generated from the voxelized lidar PC, and Doppler contexts are cropped from the radar cube at the proposal locations; in the second stage, the fused lidar and radar features are used for object detection and object motion status classification. When evaluated on inclement-weather measurements, generated from real-life measurements by a foggification model, the proposed method outperforms the lidar-only network by a large margin for vulnerable road users in terms of the intersection over union (IoU) metric, with improvements of 4.5% for pedestrians and 6.1% for cyclists. In addition, it achieves an 87% F1 score (81.6% precision and 93.1% recall) for single-frame object motion status classification.
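The two-stage pipeline summarized above can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the module names, tensor shapes, bird's-eye-view lidar representation, number of proposals, and the simple crop-and-concatenate fusion are all illustrative assumptions, as is the moving/static interpretation of the motion status labels.

```python
# Minimal sketch of the two-stage lidar-radar deep fusion described in the
# abstract. All module names, tensor shapes, and the crop-and-concatenate
# fusion scheme are illustrative assumptions, not the paper's implementation.
import torch
import torch.nn as nn


class ProposalNet(nn.Module):
    """Stage 1: generate region proposals from a voxelized lidar point cloud
    (here assumed to be a bird's-eye-view grid, B x C x H x W)."""
    def __init__(self, in_ch=8, num_proposals=16):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Each proposal: (cx, cy, w, h) in normalized grid coordinates.
        self.head = nn.Linear(64, num_proposals * 4)
        self.num_proposals = num_proposals

    def forward(self, bev):
        feat = self.backbone(bev).flatten(1)
        return torch.sigmoid(self.head(feat)).view(-1, self.num_proposals, 4)


def crop_doppler_context(radar_cube, proposals, out_size=8):
    """Crop a per-proposal Doppler context from the radar cube
    (B x D x H x W, D = Doppler bins) via differentiable grid sampling."""
    crops = []
    for i in range(proposals.shape[1]):
        cx, cy, pw, ph = proposals[:, i].unbind(-1)  # each of shape (B,)
        ys = torch.linspace(-1, 1, out_size, device=radar_cube.device)
        xs = torch.linspace(-1, 1, out_size, device=radar_cube.device)
        gy, gx = torch.meshgrid(ys, xs, indexing="ij")
        # Scale the unit grid by the box size and center it on the proposal.
        gx = gx[None] * pw.view(-1, 1, 1) + (cx.view(-1, 1, 1) * 2 - 1)
        gy = gy[None] * ph.view(-1, 1, 1) + (cy.view(-1, 1, 1) * 2 - 1)
        grid = torch.stack((gx, gy), dim=-1)  # (B, out, out, 2)
        crops.append(nn.functional.grid_sample(
            radar_cube, grid, align_corners=False))
    return torch.stack(crops, dim=1)  # (B, N, D, out, out)


class FusionHead(nn.Module):
    """Stage 2: fuse lidar proposal features with the Doppler contexts, then
    predict an object class and a motion status (assumed moving vs. static)."""
    def __init__(self, doppler_bins=32, num_classes=3):
        super().__init__()
        self.doppler_enc = nn.Sequential(
            nn.Conv2d(doppler_bins, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.cls_head = nn.Linear(64 + 4, num_classes)  # vehicle/pedestrian/cyclist
        self.motion_head = nn.Linear(64 + 4, 2)         # moving / static

    def forward(self, proposals, doppler_crops):
        b, n = doppler_crops.shape[:2]
        feat = self.doppler_enc(doppler_crops.flatten(0, 1)).flatten(1)
        fused = torch.cat([feat, proposals.flatten(0, 1)], dim=-1)
        return (self.cls_head(fused).view(b, n, -1),
                self.motion_head(fused).view(b, n, -1))


if __name__ == "__main__":
    bev = torch.rand(2, 8, 64, 64)     # voxelized lidar PC (BEV grid)
    radar = torch.rand(2, 32, 64, 64)  # radar cube: Doppler bins x H x W
    proposals = ProposalNet()(bev)
    contexts = crop_doppler_context(radar, proposals)
    cls_logits, motion_logits = FusionHead()(proposals, contexts)
    print(cls_logits.shape, motion_logits.shape)  # (2, 16, 3) (2, 16, 2)
```

The sketch keeps the key structural idea of the paper's pipeline: radar Doppler information is attached to each lidar proposal before classification, so the second stage sees both geometry and motion cues in a single frame.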
APA:
Jin, Y., Kuang, Y., Hoffmann, M., Schüßler, C., Deligiannis, A., Fuentes-Michel, J.-C., & Vossiek, M. (2023). Radar and Lidar Deep Fusion: Providing Doppler Contexts to Time-of-Flight Lidar. IEEE Sensors Journal, 23(20), 25587-25600. https://doi.org/10.1109/JSEN.2023.3313093
MLA:
Jin, Yi, et al. "Radar and Lidar Deep Fusion: Providing Doppler Contexts to Time-of-Flight Lidar." IEEE Sensors Journal 23.20 (2023): 25587-25600.