Jin Y, Deligiannis A, Fuentes-Michel JC, Vossiek M (2023)
Publication Type: Journal article
Publication year: 2023
Page range: 1-15
With the rapid development of autonomous driving technology, radar sensors play a vital role in the perception system owing to their robustness under harsh environmental conditions and their precise range and velocity measurement capability. However, the state-of-the-art performance of radar-only algorithms on perception tasks such as classifying road users and infrastructure still lags far behind expectations. This shortfall can mainly be attributed to the extreme sparsity of radar point clouds, low angular resolution, and ghost targets. In this work, we propose a novel network that takes the complex range-Doppler matrix as input to achieve radar-tailored panoptic segmentation (i.e., free-space segmentation and object detection). Our network surpasses previous works on both tasks, with an especially notable improvement in free-space segmentation. During training, a segmented camera image, adapted to the characteristics of the radar, serves as the ground truth. This cross-modal supervision considerably reduces the labeling expense. Building on it, we further design an innovative camera-radar system concept that can automatically train deep neural networks with radar measurements.
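To make the input representation described in the abstract concrete, the following is a minimal sketch of how a complex range-Doppler matrix is conventionally computed from raw radar ADC samples: a windowed FFT over fast time (samples within a chirp) yields range bins, and a second windowed FFT over slow time (across chirps) yields Doppler bins. The array shapes, Hann windows, and function name here are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def range_doppler_matrix(adc_cube: np.ndarray) -> np.ndarray:
    """adc_cube: complex ADC samples of shape (num_chirps, num_samples).

    Returns the complex range-Doppler matrix of the same shape:
    Doppler bins along axis 0, range bins along axis 1.
    """
    num_chirps, num_samples = adc_cube.shape

    # Range FFT over fast time, windowed to suppress range sidelobes.
    range_win = np.hanning(num_samples)
    range_fft = np.fft.fft(adc_cube * range_win[np.newaxis, :], axis=1)

    # Doppler FFT over slow time, shifted so that zero velocity
    # sits in the center Doppler bin.
    doppler_win = np.hanning(num_chirps)
    rd = np.fft.fft(range_fft * doppler_win[:, np.newaxis], axis=0)
    return np.fft.fftshift(rd, axes=0)  # complex-valued network input

# Example with synthetic data: 128 chirps x 256 samples per chirp.
cube = np.random.randn(128, 256) + 1j * np.random.randn(128, 256)
print(range_doppler_matrix(cube).shape)  # (128, 256)
```

In practice the real and imaginary parts (or magnitude and phase) of this matrix would be stacked as input channels for the network; how the paper encodes them is not specified in this abstract.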
APA:
Jin, Y., Deligiannis, A., Fuentes-Michel, J. C., & Vossiek, M. (2023). Cross-Modal Supervision-Based Multitask Learning with Automotive Radar Raw Data. IEEE Transactions on Intelligent Vehicles, 1-15. https://doi.org/10.1109/TIV.2023.3234583
MLA:
Jin, Yi, et al. "Cross-Modal Supervision-Based Multitask Learning with Automotive Radar Raw Data." IEEE Transactions on Intelligent Vehicles (2023): 1-15.