Self-Supervised 2D/3D Registration for X-Ray to CT Image Fusion

Jaganathan S, Kukla M, Wang J, Shetty K, Maier A (2023)


Publication Type: Conference contribution

Publication year: 2023

DOI: 10.1109/WACV56688.2023.00281

Abstract

Deep learning-based 2D/3D registration enables fast, robust, and accurate X-ray to CT image fusion when large annotated paired datasets are available for training. However, the need for paired CT volumes and X-ray images with ground-truth registration limits its applicability in interventional scenarios. An alternative is to use simulated X-ray projections rendered from CT volumes, which removes the need for paired annotated datasets. Deep neural networks trained exclusively on simulated X-ray projections can, however, perform significantly worse on real X-ray images due to the domain gap. We propose a self-supervised 2D/3D registration framework that combines simulated training with unsupervised feature- and pixel-space domain adaptation to overcome the domain gap and eliminate the need for paired annotated datasets. Our framework achieves a registration accuracy of 1.83 ± 1.16 mm with a high success ratio of 90.1% on real X-ray images, a 23.9% increase in success ratio over reference annotation-free algorithms.
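
The core idea the abstract relies on, training on simulated X-ray projections (digitally reconstructed radiographs, DRRs) rendered from CT volumes so that no paired annotated data is required, can be illustrated with a minimal Python sketch. This is not the authors' pipeline: the parallel-beam geometry, the HU-to-attenuation conversion, and all parameter values below are illustrative assumptions.

import numpy as np

def hu_to_attenuation(ct_hu, mu_water=0.02):
    # Convert Hounsfield units to linear attenuation coefficients (1/mm).
    # mu_water is an assumed reference value, not taken from the paper.
    return mu_water * (1.0 + np.clip(ct_hu, -1000, None) / 1000.0)

def simulate_drr(ct_hu, voxel_size_mm=1.0, axis=1):
    # Digitally reconstructed radiograph via the Beer-Lambert law:
    # detector intensity I = I0 * exp(-integral of mu along each ray),
    # approximated here with parallel rays along one volume axis.
    mu = hu_to_attenuation(ct_hu)
    line_integral = mu.sum(axis=axis) * voxel_size_mm
    return np.exp(-line_integral)

# Usage: project a CT-like volume (hypothetical data) along one axis to
# obtain a synthetic X-ray that can serve as simulated training input.
ct = np.random.randint(-1000, 2000, size=(128, 128, 128)).astype(np.float32)
drr = simulate_drr(ct, voxel_size_mm=0.8)
print(drr.shape)  # (128, 128)

In the actual framework, such simulated projections provide self-supervised training pairs with known registration, while the domain adaptation components bridge the gap to real X-ray images.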

How to cite

APA:

Jaganathan, S., Kukla, M., Wang, J., Shetty, K., & Maier, A. (2023). Self-supervised 2D/3D registration for X-ray to CT image fusion. In 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). https://doi.org/10.1109/WACV56688.2023.00281

MLA:

Jaganathan, Srikrishna, et al. "Self-Supervised 2D/3D Registration for X-Ray to CT Image Fusion." 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023, doi:10.1109/WACV56688.2023.00281.
