Boosting Neural Image Compression for Machines Using Latent Space Masking

Fischer K, Brand F, Kaup A (2022)


Publication Language: English

Publication Status: Accepted

Publication Type: Journal article


Publication year: 2022

Journal: IEEE Transactions on Circuits and Systems for Video Technology

URI: https://arxiv.org/abs/2112.08168

DOI: 10.1109/TCSVT.2022.3195322

Open Access Link: https://arxiv.org/abs/2112.08168

Abstract

Today, many image coding scenarios do not have a human as the final intended user, but rather a machine performing computer vision tasks on the decoded image. In such scenarios, the primary goal is not to preserve visual quality but to maintain the task accuracy of the machine at a given bitrate. Owing to the tremendous progress of deep neural networks, which set benchmark results, neural networks are mostly employed to solve the analysis tasks at the decoder side. Moreover, neural networks have also found their way into the field of image compression recently. These two developments allow for an end-to-end training of the neural compression network with an analysis network as the information sink.

Therefore, we first carry out such a training with a task-specific loss to enhance the coding performance of neural compression networks. Compared to the standard VVC, 41.4% of the bitrate is saved by this method for Mask R-CNN as the analysis network on the uncompressed Cityscapes dataset. As our main contribution, we propose LSMnet, a network that runs in parallel to the encoder network and masks out elements of the latent space that are presumably not required for the analysis network. With this approach, an additional 27.3% of the bitrate is saved compared to the basic neural compression network optimized with the task loss.

In addition, we are the first to utilize a feature-based distortion in the training loss within the context of machine-to-machine communication, which allows for a training without annotated data. We provide extensive analyses on the Cityscapes dataset including cross-evaluation with different analysis networks and present exemplary visual results.

Inference code and pre-trained models are published at https://github.com/FAU-LMS/NCN_for_M2M.
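The latent-space masking described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' LSMnet: the function name, the per-element relevance map, the threshold, and the elementwise multiplication are all illustrative assumptions; the actual mask derivation is described in the paper and repository.

```python
import numpy as np

# Sketch: a network running in parallel to the encoder predicts a
# per-element relevance map for the latent tensor; elements judged
# irrelevant for the downstream analysis task are zeroed, so the
# entropy coder spends (almost) no rate on them.

def mask_latent(latent: np.ndarray, relevance: np.ndarray,
                threshold: float = 0.5) -> np.ndarray:
    """Zero out latent elements deemed irrelevant for the analysis task."""
    keep = (relevance > threshold).astype(latent.dtype)
    return latent * keep

rng = np.random.default_rng(0)
y = rng.normal(size=(8, 4, 4))    # toy latent tensor (channels, H, W)
r = rng.uniform(size=y.shape)     # toy relevance map in [0, 1]
y_masked = mask_latent(y, r)

# Masked elements are exactly zero and therefore cheap to entropy-code.
print(np.count_nonzero(y_masked) < y.size)
```

The rate saving comes from the sparsified latent: zeroed elements collapse to a highly probable symbol for the entropy model, which is the intuition behind the 27.3% additional bitrate reduction reported in the abstract.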


How to cite

APA:

Fischer, K., Brand, F., & Kaup, A. (2022). Boosting Neural Image Compression for Machines Using Latent Space Masking. IEEE Transactions on Circuits and Systems for Video Technology. https://doi.org/10.1109/TCSVT.2022.3195322

MLA:

Fischer, Kristian, Fabian Brand, and André Kaup. "Boosting Neural Image Compression for Machines Using Latent Space Masking." IEEE Transactions on Circuits and Systems for Video Technology (2022).
