Artificial-Intelligence-Driven Decision-Making in the Clinic. Ethical, Legal and Societal Challenges (vALID)

Third Party Funds Group - Overall project


Acronym: vALID

Start date: 01.11.2019

End date: 31.10.2022


Project details

Short description

AI is on everyone’s lips, and its applications are becoming increasingly relevant to clinical decision-making. While many of the conceivable use cases of clinical AI still lie in the future, others have already begun to shape practice. The project vALID provides a normative, legal, and technical analysis of how AI-driven clinical Decision Support Systems could be aligned with the ideal of clinician and patient sovereignty. It examines how concepts of trustworthiness, transparency, agency, and responsibility are affected and shifted by clinical AI, both on a theoretical level and with regard to concrete moral and legal consequences. This analysis is grounded in an empirical case study that deploys mock-up simulations of AI-driven clinical Decision Support Systems and systematically gathers clinician and patient attitudes toward a variety of designs and implementations. One key output of vALID will be a governance perspective on human-centric AI-driven Decision Support Systems in the context of shared clinical decision-making.

Scientific Abstract

AI is on everyone’s lips, and its applications are becoming increasingly relevant to clinical decision-making. While many of the conceivable use cases still lie in the future, others have already begun to shape practice. The project vALID provides a normative, legal, and technical analysis of how AI-driven clinical Decision Support Systems could be aligned with the ideal of clinician and patient sovereignty.

vALID consists of four subprojects. Building on an in-depth analysis of existing normative work on clinical AI, the ethical subproject examines what the ideal of clinician and patient sovereignty encompasses. Starting from a de lege lata analysis, the legal subproject analyzes and evaluates various regulatory options in the national and international context. Both subprojects reflect on how concepts of trustworthiness, transparency, agency, and responsibility are influenced and shifted by clinical AI, both on a theoretical level and with regard to concrete moral and legal consequences.

In the technical subproject, against the background of a thorough analysis of what is technically possible and already practiced in the clinic, mock-up simulations of conventional, automated, and integrative decision support systems will be developed. In the empirical subproject, clinicians and patients will be exposed to these mock-up simulations. Quantitative and qualitative methods will then be used to systematically gather their perspectives and argumentative patterns on the range of designs and implementations of AI-driven clinical decision support systems.

Throughout this process, the subprojects remain methodologically intertwined: the normative subprojects develop the conceptual framework for the empirical investigations and, in turn, incorporate the empirical results into their positions.

On this basis, the four vALID subprojects will jointly develop an ethically, legally, technically, and empirically informed governance perspective on AI-driven decision support systems in the context of shared clinical decision-making.
