Multiscale Augmented Normalizing Flows for Image Compression
Windsheimer M, Brand F, Kaup A (2024)
Publication Type: Conference contribution
Publication year: 2024
ISBN: 979-8-3503-4485-1
URI: https://ieeexplore.ieee.org/document/10446147
DOI: 10.1109/ICASSP48485.2024.10446147
Open Access Link: https://arxiv.org/abs/2305.05451
Abstract:
Most learning-based image compression methods lack efficiency at high image quality due to their non-invertible design. The decoding function of the frequently applied compressive autoencoder architecture is only an approximate inverse of the encoding transform. This issue can be resolved by using invertible latent variable models, which allow perfect reconstruction if no quantization is performed. Furthermore, many traditional image and video coders apply dynamic block partitioning to vary the compression of certain image regions depending on their content. Inspired by this approach, hierarchical latent spaces have been applied to learning-based compression networks. In this paper, we present a novel concept that adapts the hierarchical latent space for augmented normalizing flows, an invertible latent variable model. Our best-performing model achieves significant rate savings of more than 7% over comparable single-scale models.
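
To make the core idea more concrete, below is a minimal, hypothetical Python/PyTorch sketch of an additive augmented-normalizing-flow coupling step with a two-scale (hierarchical) latent space. It is not the authors' implementation; all module names, channel counts, network depths, and the number of scales are illustrative assumptions. The point it demonstrates is the one stated in the abstract: purely additive couplings are exactly invertible, so reconstruction is perfect as long as the latents are not quantized.

# Illustrative sketch only: one additive ANF coupling step with a
# two-scale augmented latent (full-resolution y_fine, half-resolution y_coarse).
# Networks are placeholders, not the architecture from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiscaleANFCoupling(nn.Module):
    """Image x additively updates both latents; both latents then update x.

    Because every update is additive, the step can be undone exactly,
    i.e. decoding is a true inverse of encoding when no quantization is applied.
    """

    def __init__(self, x_ch=3, y_ch=64):
        super().__init__()
        # Placeholder transforms (a real model would use deeper networks).
        self.enc_fine = nn.Conv2d(x_ch, y_ch, 3, padding=1)
        self.enc_coarse = nn.Conv2d(x_ch, y_ch, 3, stride=2, padding=1)
        self.dec = nn.Conv2d(2 * y_ch, x_ch, 3, padding=1)

    def _predict_x(self, y_fine, y_coarse):
        # Combine both scales to predict (and remove) image content from x.
        y_up = F.interpolate(y_coarse, scale_factor=2, mode="nearest")
        return self.dec(torch.cat([y_fine, y_up], dim=1))

    def forward(self, x, y_fine, y_coarse):
        # Encoding direction: push image content into the hierarchical latent.
        y_fine = y_fine + self.enc_fine(x)
        y_coarse = y_coarse + self.enc_coarse(x)
        x = x - self._predict_x(y_fine, y_coarse)
        return x, y_fine, y_coarse

    def inverse(self, x, y_fine, y_coarse):
        # Decoding direction: exact inverse of forward().
        x = x + self._predict_x(y_fine, y_coarse)
        y_coarse = y_coarse - self.enc_coarse(x)
        y_fine = y_fine - self.enc_fine(x)
        return x, y_fine, y_coarse


if __name__ == "__main__":
    step = MultiscaleANFCoupling()
    x = torch.randn(1, 3, 64, 64)
    y_f = torch.zeros(1, 64, 64, 64)
    y_c = torch.zeros(1, 64, 32, 32)

    xr, yf, yc = step(x, y_f, y_c)          # encode
    x_rec, _, _ = step.inverse(xr, yf, yc)  # decode
    print(torch.allclose(x, x_rec, atol=1e-5))  # True: lossless without quantization

In an actual codec, loss would enter only through quantization and entropy coding of the two latent scales; the coarse latent can spend fewer bits on smooth regions while the fine latent captures detailed content, loosely analogous to the block partitioning of traditional coders mentioned in the abstract.
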
APA:
Windsheimer, M., Brand, F., & Kaup, A. (2024). Multiscale Augmented Normalizing Flows for Image Compression. In Proceedings of the International Conference on Acoustics, Speech and Signal Processing. Seoul, KR.
MLA:
Windsheimer, Marc, Fabian Brand, and André Kaup. "Multiscale Augmented Normalizing Flows for Image Compression." Proceedings of the International Conference on Acoustics, Speech and Signal Processing, Seoul 2024.