Robust Deep Neural Object Detection and Segmentation for Automotive Driving Scenario with Compressed Image Data

Fischer K, Blum C, Herglotz C, Kaup A (2021)


Publication Language: English

Publication Type: Conference contribution

Publication year: 2021

Event location: Daegu (Virtual Conference), KR

URI: https://arxiv.org/abs/2205.06501

DOI: 10.1109/ISCAS51556.2021.9401621

Open Access Link: https://arxiv.org/abs/2205.06501

Abstract

Deep neural object detection or segmentation networks are commonly trained with pristine, uncompressed data. However, in practical applications the input images are usually deteriorated by compression that is applied to efficiently transmit the data. Thus, we propose to add deteriorated images to the training process in order to increase the robustness of the two state-of-the-art networks Faster and Mask R-CNN. Throughout our paper, we investigate an autonomous driving scenario by evaluating the newly trained models on the Cityscapes dataset that has been compressed with the upcoming video coding standard Versatile Video Coding (VVC). When employing the models that have been trained with the proposed method, the weighted average precision of the R-CNNs can be increased by up to 3.68 percentage points for compressed input images, which corresponds to bitrate savings of nearly 48 %.
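The record does not include code. As a rough, hedged illustration of the training strategy described in the abstract (adding compression-degraded images to the training data of Faster/Mask R-CNN), the sketch below re-encodes a training image at a random low quality before it is passed to the detector. The JPEG round-trip via Pillow, the RandomCompression class name, and the quality range are assumptions chosen as a simple stand-in; the paper itself compresses the Cityscapes images with VVC.

```python
# Minimal sketch (not the authors' pipeline): mix compression-degraded and
# pristine images during training. A JPEG round-trip stands in for the VVC
# compression used in the paper.
import io
import random

from PIL import Image


class RandomCompression:
    """Randomly re-encode a PIL image at low quality to simulate the
    compression artifacts of transmitted images during training."""

    def __init__(self, p=0.5, quality_range=(10, 50)):
        self.p = p                          # probability of degrading a sample
        self.quality_range = quality_range  # hypothetical JPEG quality range

    def __call__(self, img: Image.Image) -> Image.Image:
        if random.random() >= self.p:
            return img  # keep the pristine image
        quality = random.randint(*self.quality_range)
        buffer = io.BytesIO()
        img.save(buffer, format="JPEG", quality=quality)
        buffer.seek(0)
        return Image.open(buffer).convert("RGB")


if __name__ == "__main__":
    # Usage example with a dummy Cityscapes-sized image.
    transform = RandomCompression(p=0.5, quality_range=(10, 50))
    img = Image.new("RGB", (2048, 1024), color=(128, 128, 128))
    augmented = transform(img)
    print(augmented.size)
```

Such a transform would typically be dropped into the detector's data-loading pipeline so that each epoch sees a mixture of pristine and degraded inputs, which is the robustness idea the abstract describes.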

How to cite

APA:

Fischer, K., Blum, C., Herglotz, C., & Kaup, A. (2021). Robust Deep Neural Object Detection and Segmentation for Automotive Driving Scenario with Compressed Image Data. In Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS). Daegu (Virtual Conference), KR.

MLA:

Fischer, Kristian, et al. "Robust Deep Neural Object Detection and Segmentation for Automotive Driving Scenario with Compressed Image Data." Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS), Daegu (Virtual Conference) 2021.
