From patches to objects: exploiting spatial reasoning for better visual representations

Albert T, Eskofier B, Zanca D (2024)


Publication Language: English

Publication Type: Journal article

Publication year: 2024

Journal: Discover Applied Sciences

Volume: 6

Article Number: 232

Issue: 5

DOI: 10.1007/s42452-024-05894-2

Abstract

As the field of deep learning steadily transitions from academic research to practical application, self-supervised pretraining methods have become increasingly significant. These methods, particularly in the image domain, offer a compelling strategy for effectively utilizing the abundance of unlabeled image data, thereby enhancing performance on downstream tasks. In this paper, we propose Spatial Reasoning, a novel auxiliary pretraining method that takes advantage of a more flexible formulation of contrastive learning by introducing spatial reasoning as an auxiliary task for discriminative self-supervised methods. Spatial Reasoning works by having the network predict the relative distances between sampled non-overlapping patches. We argue that this forces the network to learn more detailed and intricate internal representations of the objects and of the relationships between their constituent parts. Our experiments demonstrate substantial improvements in downstream performance under linear evaluation compared to similar work and provide directions for further research into spatial reasoning.
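The auxiliary task described above lends itself to a short illustration. Below is a minimal sketch, assuming a PyTorch-style setup, of predicting the relative offset between two non-overlapping patches as a regression loss to be combined with a contrastive objective during pretraining. The patch size, the head architecture, the offset normalization, and all names (SpatialReasoningHead, sample_nonoverlapping_patches, spatial_reasoning_loss) are illustrative assumptions, not the authors' published implementation.

    # Illustrative sketch of a spatial-reasoning auxiliary task: predict the
    # relative offset between two non-overlapping patches of the same image.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SpatialReasoningHead(nn.Module):
        """Regresses the normalized (dx, dy) offset between two patch embeddings."""
        def __init__(self, embed_dim: int):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(2 * embed_dim, embed_dim),
                nn.ReLU(inplace=True),
                nn.Linear(embed_dim, 2),  # predicted relative offset (dx, dy)
            )

        def forward(self, z_a, z_b):
            return self.mlp(torch.cat([z_a, z_b], dim=-1))

    def sample_nonoverlapping_patches(img, patch=64):
        """Sample two non-overlapping patches from img (C, H, W) and return them
        together with their relative offset, normalized by the image size."""
        _, H, W = img.shape
        while True:
            ya = torch.randint(0, H - patch + 1, (1,)).item()
            xa = torch.randint(0, W - patch + 1, (1,)).item()
            yb = torch.randint(0, H - patch + 1, (1,)).item()
            xb = torch.randint(0, W - patch + 1, (1,)).item()
            # patches overlap only if they are closer than `patch` on both axes
            if abs(ya - yb) >= patch or abs(xa - xb) >= patch:
                break
        offset = torch.tensor([(xb - xa) / W, (yb - ya) / H], dtype=torch.float32)
        return (img[:, ya:ya + patch, xa:xa + patch],
                img[:, yb:yb + patch, xb:xb + patch],
                offset)

    def spatial_reasoning_loss(encoder, head, imgs):
        """Auxiliary regression loss for a batch of images (B, C, H, W); assumed
        to be added to the main contrastive loss with some weight."""
        patches_a, patches_b, offsets = zip(
            *(sample_nonoverlapping_patches(im) for im in imgs))
        z_a = encoder(torch.stack(patches_a))  # encoder: patches -> (B, embed_dim)
        z_b = encoder(torch.stack(patches_b))
        return F.mse_loss(head(z_a, z_b), torch.stack(offsets))

In this reading, the spatial-reasoning head would be discarded after pretraining and only the encoder kept for linear evaluation, mirroring the usual treatment of auxiliary heads in self-supervised methods.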

How to cite

APA:

Albert, T., Eskofier, B., & Zanca, D. (2024). From patches to objects: exploiting spatial reasoning for better visual representations. Discover Applied Sciences, 6(5), Article 232. https://doi.org/10.1007/s42452-024-05894-2

MLA:

Albert, Toni, Björn Eskofier, and Dario Zanca. "From patches to objects: exploiting spatial reasoning for better visual representations." Discover Applied Sciences 6.5 (2024).