Towards a Better Understanding of Evaluating Trustworthiness in AI Systems

Kemmerzell N, Schreiner A, Khalid H, Schalk M, Bordoli L (2025)


Publication Language: English

Publication Type: Journal article

Publication year: 2025

Journal: ACM Computing Surveys

Volume: 57

Article Number: 218

Journal Issue: 9

DOI: 10.1145/3721976

Abstract

With the increasing integration of artificial intelligence into applications across industries, numerous institutions are striving to establish requirements that AI systems must meet to be considered trustworthy, such as fairness, privacy, robustness, or transparency. To implement Trustworthy AI in real-world applications, these requirements need to be operationalized, which includes evaluating the extent to which they are fulfilled. This survey contributes to the discourse by outlining the current understanding of trustworthiness and its evaluation. First, existing evaluation frameworks are analyzed and common dimensions of trustworthiness are derived from them. For each dimension, the literature is surveyed for evaluation strategies, with a specific focus on quantitative metrics. By mapping these strategies onto the machine learning lifecycle, an evaluation framework is derived that can serve as a foundation for the operationalization of Trustworthy AI.
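
As an illustration of the kind of quantitative metric such an evaluation framework covers, the sketch below computes the demographic parity difference, a widely used fairness measure; the function name and example data are hypothetical and not taken from the article.

import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Absolute gap in positive-prediction rates between the two groups
    encoded by a binary sensitive attribute (0 = no gap, 1 = maximal gap).
    Illustrative only; not an API from the surveyed frameworks."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_a = y_pred[sensitive == 0].mean()  # positive-prediction rate, group 0
    rate_b = y_pred[sensitive == 1].mean()  # positive-prediction rate, group 1
    return abs(rate_a - rate_b)

# Example: six predictions split across a binary sensitive attribute.
print(demographic_parity_difference([1, 0, 1, 1, 0, 0], [0, 0, 0, 1, 1, 1]))
# prints 0.333..., i.e. group 0 receives positive predictions twice as often.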

How to cite

APA:

Kemmerzell, N., Schreiner, A., Khalid, H., Schalk, M., & Bordoli, L. (2025). Towards a Better Understanding of Evaluating Trustworthiness in AI Systems. ACM Computing Surveys, 57(9). https://doi.org/10.1145/3721976

MLA:

Kemmerzell, Nils, et al. "Towards a Better Understanding of Evaluating Trustworthiness in AI Systems." ACM Computing Surveys 57.9 (2025).
