Deep Reinforcement Learning for the Navigation of Neurovascular Catheters

Behr T, Pusch TP, Siegfarth M, Hüsener D, Mörschel T, Karstensen L (2019)


Publication Type: Journal article

Publication Year: 2019

Journal: Current Directions in Biomedical Engineering

Volume: 5

Page Range: 5-8

Journal Issue: 1

DOI: 10.1515/cdbme-2019-0002

Abstract

Endovascular catheters are necessary for state-of-the-art treatments of life-threatening and time-critical diseases such as strokes and heart attacks. Navigating them through the vascular tree is a highly challenging task. We present our preliminary results for the autonomous control of a guidewire through a vessel phantom with the help of Deep Reinforcement Learning. We trained Deep Q-Network (DQN) and Deep Deterministic Policy Gradient (DDPG) agents on a simulated vessel phantom and evaluated the training performance. We also investigated the effect of two enhancements, Hindsight Experience Replay (HER) and Human Demonstration (HD), on the training speed of our agents. The results show that the agents are capable of learning to navigate a guidewire from a random start point in the vessel phantom to a random goal, with an average success rate of 86.5% for DQN and 89.6% for DDPG. The use of HER and HD significantly increases the training speed. The results are promising, and future research should address more complex vessel phantoms and the combined use of guidewire and catheter.
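
The abstract describes goal-conditioned DQN and DDPG agents whose training is accelerated with Hindsight Experience Replay. The snippet below is a minimal sketch of the HER relabeling idea only, not the authors' implementation; the 2-D guidewire-tip position, the goal tolerance, and the sparse reward are assumptions made for illustration.

```python
# Illustrative sketch (not the paper's code): goal-conditioned replay buffer
# with Hindsight Experience Replay ("final" strategy) relabeling, assuming a
# hypothetical 2-D tip position and a sparse 0/-1 reward.
import random
from collections import deque

import numpy as np


class HERReplayBuffer:
    """Stores (state, action, reward, next_state, goal) tuples and, after each
    episode, re-stores them with the episode's final tip position as the goal,
    so even failed episodes yield transitions that reach their (relabeled) goal."""

    def __init__(self, capacity=100_000, goal_tolerance=1.0):
        self.buffer = deque(maxlen=capacity)
        self.goal_tolerance = goal_tolerance

    def reward(self, achieved, goal):
        # Sparse reward: 0 if the tip is within tolerance of the goal, else -1.
        return 0.0 if np.linalg.norm(achieved - goal) < self.goal_tolerance else -1.0

    def store_episode(self, episode):
        """episode: list of (state, action, next_state, goal) with 2-D positions."""
        final_tip_pos = episode[-1][2]  # where the guidewire actually ended up
        for state, action, next_state, goal in episode:
            # Original transition with the intended goal.
            self.buffer.append(
                (state, action, self.reward(next_state, goal), next_state, goal)
            )
            # HER transition: pretend the achieved final position was the goal.
            self.buffer.append(
                (state, action, self.reward(next_state, final_tip_pos),
                 next_state, final_tip_pos)
            )

    def sample(self, batch_size):
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))


# Example with hypothetical numbers: a two-step episode that misses its goal
# still produces relabeled transitions whose goal was actually reached.
buf = HERReplayBuffer(goal_tolerance=1.0)
episode = [
    (np.array([0.0, 0.0]), 0, np.array([1.0, 0.0]), np.array([5.0, 5.0])),
    (np.array([1.0, 0.0]), 1, np.array([2.0, 1.0]), np.array([5.0, 5.0])),
]
buf.store_episode(episode)
batch = buf.sample(4)
```

Sampled batches would then feed the DQN or DDPG update in the usual way; the relabeled goals are what provide a non-trivial learning signal under the sparse reward.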


How to cite

APA:

Behr, T., Pusch, T.P., Siegfarth, M., Hüsener, D., Mörschel, T., & Karstensen, L. (2019). Deep Reinforcement Learning for the Navigation of Neurovascular Catheters. Current Directions in Biomedical Engineering, 5(1), 5-8. https://doi.org/10.1515/cdbme-2019-0002

MLA:

Behr, Tobias, et al. "Deep Reinforcement Learning for the Navigation of Neurovascular Catheters." Current Directions in Biomedical Engineering 5.1 (2019): 5-8.
