Saliency-Driven Versatile Video Coding for Neural Object Detection

Fischer K, Fleckenstein F, Herglotz C, Kaup A (2021)


Publication Type: Conference Contribution

Publication year: 2021

Event location: Toronto, Canada (Virtual Conference)

URI: https://arxiv.org/abs/2203.05944

DOI: 10.1109/ICASSP39728.2021.9415048

Open Access Link: https://arxiv.org/abs/2203.05944

Abstract

Saliency-driven image and video coding for humans has gained importance in recent years. In this paper, we propose such a saliency-driven coding framework for the video coding for machines task using the latest video coding standard, Versatile Video Coding (VVC). To determine the salient regions before encoding, we employ the real-time-capable object detection network You Only Look Once (YOLO) in combination with a novel decision criterion. To measure the coding quality for a machine, the state-of-the-art object segmentation network Mask R-CNN is applied to the decoded frame. Extensive simulations show that, compared to reference VVC at constant quality, the proposed saliency-driven framework saves up to 29% of the bitrate at the same detection accuracy on the decoder side. In addition, we compare YOLO against more traditional saliency detection methods.
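
As a rough illustration of the framework described in the abstract, the sketch below maps thresholded object detections (e.g., YOLO outputs) to a per-CTU quantization parameter (QP) map, spending bitrate on salient blocks and saving it elsewhere. This is a minimal sketch, not the authors' implementation; the CTU size, base QP, QP offsets, and confidence threshold are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of saliency-driven per-block QP assignment
# (illustrative only; not the authors' code).
import numpy as np

CTU_SIZE = 128          # VVC coding tree unit size (luma samples)
BASE_QP = 32            # assumed base quantization parameter
QP_SALIENT = -5         # assumed offset: finer quantization on objects
QP_NON_SALIENT = +10    # assumed offset: coarser quantization elsewhere
CONF_THRESHOLD = 0.25   # assumed detector confidence cut-off

def ctu_qp_map(frame_w, frame_h, detections):
    """Map detector boxes to a per-CTU QP map.

    detections: iterable of (x0, y0, x1, y1, confidence) in pixels,
    e.g. thresholded YOLO outputs for one frame.
    """
    ctus_x = -(-frame_w // CTU_SIZE)   # ceiling division
    ctus_y = -(-frame_h // CTU_SIZE)
    # Start with coarse quantization everywhere (non-salient).
    qp = np.full((ctus_y, ctus_x), BASE_QP + QP_NON_SALIENT, dtype=int)
    for x0, y0, x1, y1, conf in detections:
        if conf < CONF_THRESHOLD:
            continue
        # Mark every CTU that the box overlaps as salient.
        cx0, cy0 = int(x0) // CTU_SIZE, int(y0) // CTU_SIZE
        cx1, cy1 = int(x1) // CTU_SIZE, int(y1) // CTU_SIZE
        qp[cy0:cy1 + 1, cx0:cx1 + 1] = BASE_QP + QP_SALIENT
    return qp

# Example: one detection on a 1920x1080 frame.
print(ctu_qp_map(1920, 1080, [(600, 300, 900, 700, 0.9)]))
```

Such a QP map could then be handed to a VVC encoder that supports block-level QP adaptation (delta-QP signaling), so that salient CTUs are coded at higher fidelity while the background absorbs the bitrate savings.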


How to cite

APA:

Fischer, K., Fleckenstein, F., Herglotz, C., & Kaup, A. (2021). Saliency-Driven Versatile Video Coding for Neural Object Detection. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Toronto, Canada (Virtual Conference).

MLA:

Fischer, Kristian, et al. "Saliency-Driven Versatile Video Coding for Neural Object Detection." Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Toronto, Canada (Virtual Conference), 2021.
