Learning to detect anatomical landmarks of the pelvis in X-rays from arbitrary views

Bier B, Goldmann F, Zaech JN, Fotouhi J, Hegeman R, Grupp R, Armand M, Osgood G, Navab N, Maier A, Unberath M (2019)


Publication Type: Journal article

Publication year: 2019

Journal: International Journal of Computer Assisted Radiology and Surgery

DOI: 10.1007/s11548-019-01975-5

Abstract

Purpose: Minimally invasive alternatives are now available for many complex surgeries. These approaches are enabled by the increasing availability of intra-operative image guidance. Yet, fluoroscopic X-rays suffer from projective transformation and thus cannot provide direct views of the anatomy. Surgeons could benefit greatly from additional information, such as anatomical landmark locations in the projections, to support intra-operative decision making. However, detecting landmarks is challenging because the viewing direction changes substantially between views, leading to varying appearance of the same landmark. Therefore, and to the best of our knowledge, view-independent anatomical landmark detection has not been investigated yet.

Methods: In this work, we propose a novel approach to detect multiple anatomical landmarks in X-ray images from arbitrary viewing directions. To this end, a sequential prediction framework based on convolutional neural networks is employed to simultaneously regress all landmark locations. For training, synthetic X-rays are generated with a physically accurate forward model, which allows direct application of the trained model to real X-ray images of the pelvis. View invariance is achieved via data augmentation by sampling viewing angles on a spherical segment of 120° × 90°.

Results: On synthetic data, a mean prediction error of 5.6 ± 4.5 mm is achieved. Further, we demonstrate that the trained model can be directly applied to real X-rays and show that these detections define correspondences to a respective CT volume, which allow for analytic estimation of the 11-degree-of-freedom projective mapping.

Conclusion: We present the first tool to detect anatomical landmarks in X-ray images independent of their viewing direction. Access to this information during surgery may benefit decision making and constitutes a first step toward global initialization of 2D/3D registration without the need for calibration. As such, the proposed concept has strong prospects to facilitate and enhance applications and methods in the realm of image-guided surgery.
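The abstract states that the detected 2D landmarks define correspondences to a CT volume, which allows analytic estimation of the 11-degree-of-freedom projective mapping. The paper itself does not include code for this step; a standard way to perform such an estimation (not necessarily the authors' exact method) is the Direct Linear Transform, sketched below. The function name and inputs are illustrative:

```python
import numpy as np

def estimate_projection_dlt(pts3d, pts2d):
    """Estimate a 3x4 projection matrix (11 DoF, defined up to scale)
    from 2D-3D point correspondences via the Direct Linear Transform.

    pts3d: iterable of (X, Y, Z) landmark positions in the CT volume.
    pts2d: iterable of (u, v) detected landmark positions in the X-ray.
    Requires at least six non-degenerate correspondences.
    """
    pts3d, pts2d = np.asarray(pts3d, float), np.asarray(pts2d, float)
    assert len(pts3d) == len(pts2d) >= 6
    rows = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        # Each correspondence contributes two linear constraints on the
        # 12 entries of P (stacked row-wise).
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # The least-squares solution (up to scale) is the right singular
    # vector associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 4)
```

With six or more correspondences in general position, the recovered matrix reproduces the projection up to an arbitrary scale factor, which cancels when image points are dehomogenized.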

How to cite

APA:

Bier, B., Goldmann, F., Zaech, J.N., Fotouhi, J., Hegeman, R., Grupp, R., ... Unberath, M. (2019). Learning to detect anatomical landmarks of the pelvis in X-rays from arbitrary views. International Journal of Computer Assisted Radiology and Surgery. https://dx.doi.org/10.1007/s11548-019-01975-5

MLA:

Bier, Bastian, et al. "Learning to detect anatomical landmarks of the pelvis in X-rays from arbitrary views." International Journal of Computer Assisted Radiology and Surgery (2019).
