Venator M, El Himer Y, Aklanoglu S, Bruns E, Maier A (2021)
Publication Language: English
Publication Type: Journal article, Original article
Publication year: 2021
URI: https://ieeexplore.ieee.org/document/9354898
Abstract: Visual localization provides the basis for many robotics applications such as autonomous navigation or augmented reality. Especially in outdoor scenes, robust localization requires local features that can be reliably extracted and matched under changing conditions. Previous approaches have applied generative image-to-image translation models to align images in a single domain before correspondence search. In this paper, we invert this concept and explain why it is more promising to use image domain adaptation for training robust local features. Integrating this idea into a self-supervised training framework, we show in various experiments covering image matching, visual localization, and scene reconstruction that our Domain-Invariant SuperPoint (DISP) outperforms existing self-supervised methods in terms of repeatability, generalization, and robustness. In contrast to competing supervised local features, our modular and fully self-supervised approach can easily be adapted to different domains and localization tasks, as it does not require ground-truth correspondences for training.
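The core idea stated in the abstract, generating self-supervised cross-domain training pairs by combining a known geometric warp with an image domain translation and then enforcing descriptor agreement, can be sketched in a few lines of PyTorch. This is an illustrative sketch only, not the authors' implementation: TinyEncoder, translate_to_night, random_homography_grid, and invariance_loss are all hypothetical stand-ins (the paper presumably builds on a SuperPoint-style network and a learned day-to-night translation model, available via the IEEE link above).

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEncoder(nn.Module):
    """Toy dense-descriptor network standing in for a SuperPoint-style encoder."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, dim, 3, stride=2, padding=1),
        )

    def forward(self, x):
        # L2-normalized dense descriptor map
        return F.normalize(self.net(x), dim=1)

def translate_to_night(img):
    # Placeholder for a generative image-to-image translation model
    # (e.g. day -> night); here just a crude darkening.
    return (img * 0.3).clamp(0, 1)

def random_homography_grid(b, h, w, device):
    # Mild random affine warp as a simplified stand-in for homography sampling.
    theta = torch.eye(2, 3, device=device).repeat(b, 1, 1)
    theta = theta + 0.05 * torch.randn_like(theta)
    grid = F.affine_grid(theta, (b, 3, h, w), align_corners=False)
    return grid, theta

def invariance_loss(encoder, img):
    b, _, h, w = img.shape
    grid_img, theta = random_homography_grid(b, h, w, img.device)
    # Apply the same geometric warp plus a synthetic domain shift to the input.
    warped = F.grid_sample(translate_to_night(img), grid_img, align_corners=False)
    d_src = encoder(img)     # descriptors of the original (e.g. day) image
    d_tgt = encoder(warped)  # descriptors of the warped night-style image
    # Re-sample source descriptors with the same normalized-coordinate warp so
    # that spatial positions correspond across the pair.
    fb, fc, fh, fw = d_src.shape
    grid_feat = F.affine_grid(theta, (fb, fc, fh, fw), align_corners=False)
    d_src_w = F.grid_sample(d_src, grid_feat, align_corners=False)
    # Cosine-distance loss between corresponding descriptors across domains.
    return (1 - (d_src_w * d_tgt).sum(dim=1)).mean()

if __name__ == "__main__":
    enc = TinyEncoder()
    img = torch.rand(2, 3, 128, 128)  # dummy batch in place of real images
    loss = invariance_loss(enc, img)
    loss.backward()
    print(float(loss))

Because corresponding descriptor locations are known from the sampled warp, no ground-truth correspondences are needed, which is what makes such a framework fully self-supervised; a full system would add keypoint detection, hard-negative mining, and a proper homography (rather than affine) warp.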
APA:
Venator, M., El Himer, Y., Aklanoglu, S., Bruns, E., & Maier, A. (2021). Self-Supervised Learning of Domain-Invariant Local Features for Robust Visual Localization under Challenging Conditions. IEEE Robotics and Automation Letters. https://doi.org/10.1109/LRA.2021.3059571
MLA:
Venator, Moritz, et al. "Self-Supervised Learning of Domain-Invariant Local Features for Robust Visual Localization under Challenging Conditions." IEEE Robotics and Automation Letters (2021).
BibTeX:
@article{venator2021disp,
  author  = {Venator, M. and El Himer, Y. and Aklanoglu, S. and Bruns, E. and Maier, A.},
  title   = {Self-Supervised Learning of Domain-Invariant Local Features for Robust Visual Localization under Challenging Conditions},
  journal = {IEEE Robotics and Automation Letters},
  year    = {2021},
  doi     = {10.1109/LRA.2021.3059571}
}