Braun M, Bleher H, Hummel P (2021): A Leap of Faith: Is There a Formula for "Trustworthy" AI?
Publication Type: Journal article, Original article
Publication year: 2021
Journal: Hastings Center Report
Volume: 51
Pages: 1-6
URI: https://onlinelibrary.wiley.com/doi/10.1002/hast.1207
DOI: 10.1002/hast.1207
Open Access Link: https://onlinelibrary.wiley.com/doi/10.1002/hast.1207
Trust is one of the big buzzwords in debates about the shaping of society, democracy, and emerging technologies. For example, one prominent idea put forward by the High-Level Expert Group on Artificial Intelligence appointed by the European Commission is that artificial intelligence should be trustworthy. In this essay, we explore the notion of trust and argue that both proponents and critics of trustworthy AI have flawed pictures of the nature of trust. We develop an approach to understanding trust in AI that does not conceive of trust merely as an accelerator for societal acceptance of AI technologies. Instead, we argue, trust is granted through leaps of faith. For this reason, trust remains precarious, fragile, and resistant to promotion through formulaic approaches. We further highlight the significance of distrust in societal deliberation, which is relevant to trust in various and intricate ways. Among the fruitful aspects of distrust is that it enables individuals to forgo technology if desired, to constrain its power, and to exercise meaningful human control.
APA:
Braun, M., Bleher, H., & Hummel, P. (2021). A Leap of Faith: Is There a Formula for "Trustworthy" AI? Hastings Center Report, 51, 1-6. https://doi.org/10.1002/hast.1207
MLA:
Braun, Matthias, Hannah Bleher, and Patrik Hummel. "A Leap of Faith: Is There a Formula for 'Trustworthy' AI?" Hastings Center Report 51 (2021): 1-6.