Tretter M, Samhammer D (2023)
Publication Language: English
Publication Type: Journal article, Original article
Publication year: 2023
In their article “Ethics of the algorithmic prediction of goal of care preferences”, Ferrario, Gloeckler, and Biller-Andorno pose the question of whether AI preference prediction systems should be made available to next of kin or to clinicians only. While the authors advocate that “access should be provided to both clinicians and loved ones with due explanations and as desired”, we disagree with this assessment. Using an everyday scene, we show that people are multifaceted personalities—and that, in order to avoid biased or incorrect assessments, it is important to consider this multifacetedness when thinking about their preferences. Data, however, generally paint a rather one-sided picture of people, and AI systems likewise struggle with ambiguity. With this in mind, we conclude that it is an important function of next of kin to bring additional perspectives into the preference-finding and decision-making process and to ensure that the patient is perceived as a multifaceted personality. Yet, as the authors themselves acknowledge, when surrogates come into contact with AI systems for predicting patient preferences, there is a risk that they lose confidence in their own judgments and simply follow the recommendation of the AI. To prevent this “conformity” and to ensure that relatives can fulfill their role as “advocates” for patients’ multifacetedness, we argue that access to such AI systems is best withheld from next of kin.
APA:
Tretter, M., & Samhammer, D. (2023). For the sake of multifacetedness. Why artificial intelligence patient preference prediction systems shouldn’t be for next of kin. Journal of Medical Ethics. https://doi.org/10.1136/jme-2022-108775
MLA:
Tretter, Max, and David Samhammer. "For the sake of multifacetedness. Why artificial intelligence patient preference prediction systems shouldn’t be for next of kin." Journal of Medical Ethics (2023).