Medical Image Analysis with Normative Machine Learning (ERC-CoG MIA-NORMAL)

Third party funded individual grant


Acronym: ERC-CoG MIA-NORMAL

Start date: 01.09.2023

End date: 30.09.2028


Project details

Scientific Abstract

As one of the most important aspects of diagnosis, treatment planning, treatment delivery, and follow-up, medical imaging provides an unmatched ability to identify disease with high accuracy. As a result of its success, referrals for imaging examinations have increased significantly. However, medical imaging depends on interpretation by highly specialised clinical experts and is thus rarely available at the front line of care, for patient triage, or for frequent follow-ups. At many stages of the patient journey, excluding certain conditions or confirming physiological normality is essential to streamline referrals and to relieve pressure on human experts, whose capacity is limited. Hence, there is a strong need for increased imaging with automated diagnostic support for clinicians, healthcare professionals, and caregivers.

Machine learning is expected to be an algorithmic panacea for diagnostic automation. However, despite significant advances such as Deep Learning, which has had notable impact on real-world applications, robust confirmation of normality remains an unsolved problem that cannot be addressed with established approaches.

Like clinical experts, machines should also be able to verify the absence of pathology by contrasting new images with their knowledge of healthy anatomy and expected physiological variability. The aim of this proposal is therefore to develop normative representation learning as a new machine learning paradigm for medical imaging, providing patient-specific computational tools for robust confirmation of normality, image quality control, health screening, and prevention of disease before onset. We will do this by developing novel Deep Learning approaches that learn, without manual labels, from healthy patient data only, and that are applicable to cross-sectional, sequential, and multi-modal data. The resulting models will be able to extract clinically useful and actionable information as early and as frequently as possible during patient journeys.
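To illustrate the underlying idea of learning from healthy data only (this is a minimal sketch, not the project's actual method), one common realisation is an autoencoder trained exclusively on healthy images, where a high reconstruction error on a new image signals a deviation from learned normal anatomy. The network layout, image size (1 x 64 x 64), and scoring heuristic below are assumptions made purely for illustration.

import torch
import torch.nn as nn

class NormativeAutoencoder(nn.Module):
    # Trained only on healthy images; poor reconstruction of a new image
    # indicates deviation from the learned model of normal anatomy.
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),     # 16 -> 32
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),   # 32 -> 64
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_score(model, image):
    # Per-image deviation from normality as mean squared reconstruction error.
    model.eval()
    with torch.no_grad():
        recon = model(image)
    return torch.mean((image - recon) ** 2, dim=(1, 2, 3))

# Training loop over healthy data only; no manual labels are required.
model = NormativeAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
healthy_batch = torch.rand(8, 1, 64, 64)  # placeholder for real healthy scans
for _ in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(healthy_batch), healthy_batch)
    loss.backward()
    optimizer.step()

# A new image scoring far above held-out healthy data would be flagged for
# expert review rather than confirmed as normal.
print(anomaly_score(model, torch.rand(1, 1, 64, 64)))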

Funding Source: European Research Council (ERC), Consolidator Grant