Kopte A, Kaup A (2025)
Publication Language: English
Publication Type: Conference Contribution
Publication year: 2025
Conference Proceedings Title: Proceedings of the Picture Coding Symposium (PCS)
Open Access Link: https://arxiv.org/abs/2510.03926
To manage the complexity of transformers in video compression, local attention mechanisms are a practical necessity. The common approach of partitioning frames into patches, however, creates architectural flaws such as irregular receptive fields. When adapted for temporal autoregressive models, this paradigm, exemplified by the Video Compression Transformer (VCT), also necessitates computationally redundant overlapping windows. This work introduces 3D Sliding Window Attention (SWA), a patchless form of local attention. By enabling a decoder-only architecture that unifies spatial and temporal context processing, and by providing a uniform receptive field, our method significantly improves rate-distortion performance, achieving Bjøntegaard Delta-rate savings of up to 18.6 % against the VCT baseline. Simultaneously, by eliminating the need for overlapping windows, our method reduces overall decoder complexity by a factor of 2.8, while its entropy model is nearly 3.5 times more efficient. We further analyze our model's behavior and show that while it benefits from long-range temporal context, excessive context can degrade performance.
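To illustrate the core idea from the abstract, the sketch below shows a naive, unoptimized reference of sliding-window attention over a 3D (time, height, width) token grid: every query attends to a window of the same shape centered on it (giving the uniform receptive field that patch-based local attention lacks), and the temporal window is causal, matching a temporal autoregressive decoder. This is an illustrative reconstruction, not the authors' implementation; the function name and window radii are assumptions.

```python
import numpy as np

def sliding_window_attention_3d(q, k, v, radius=(1, 2, 2)):
    """Naive 3D sliding-window attention over a (T, H, W) token grid.

    q, k, v: arrays of shape (T, H, W, d).
    radius:  per-axis half-widths (rt, rh, rw); these values are
             illustrative, not taken from the paper.

    Each query attends only to keys within its local window, so every
    position sees a window of identical shape (uniform receptive field,
    unlike patch-partitioned attention). The temporal window is causal:
    a query at frame t never attends to frames > t, as required by a
    temporal autoregressive model. Reference loop, not optimized.
    """
    T, H, W, d = q.shape
    rt, rh, rw = radius
    out = np.zeros_like(q)
    for t in range(T):
        for h in range(H):
            for w in range(W):
                # Causal temporal slice, symmetric spatial slices.
                ts = slice(max(t - rt, 0), t + 1)
                hs = slice(max(h - rh, 0), h + rh + 1)
                ws = slice(max(w - rw, 0), w + rw + 1)
                ks = k[ts, hs, ws].reshape(-1, d)
                vs = v[ts, hs, ws].reshape(-1, d)
                scores = ks @ q[t, h, w] / np.sqrt(d)
                weights = np.exp(scores - scores.max())
                weights /= weights.sum()
                out[t, h, w] = weights @ vs
    return out
```

Because the window slides with the query instead of being fixed to patch boundaries, no overlapping windows are needed to stitch context together, which is where the decoder-complexity savings reported above come from.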
APA:
Kopte, A., & Kaup, A. (2025). Sliding Window Attention for Learned Video Compression. In IEEE (Eds.), Proceedings of the Picture Coding Symposium (PCS). Aachen, DE.
MLA:
Kopte, Alexander, and André Kaup. "Sliding Window Attention for Learned Video Compression." Proceedings of the Picture Coding Symposium (PCS), IEEE, 2025, Aachen, Germany.