Lessons Learned from Assessing Trustworthy AI in Practice

Publication: Contribution to journal › Journal article › Research › peer-reviewed

Standard

Lessons Learned from Assessing Trustworthy AI in Practice. / Vetter, Dennis; Amann, Julia; Bruneault, Frédérick; Coffee, Megan; Düdder, Boris; Gallucci, Alessio; Gilbert, Thomas Krendl; Hagendorff, Thilo; Van Halem, Irmhild; Hickman, Eleanore; Hildt, Elisabeth; Holm, Sune; Kararigas, Georgios; Kringen, Pedro; Madai, Vince I.; Wiinblad Mathez, Emilie; Tithi, Jesmin Jahan; Westerlund, Magnus; Wurth, Renee; Zicari, Roberto V.

In: Digital Society, Vol. 2, No. 3, 35, 2023.


Harvard

Vetter, D, Amann, J, Bruneault, F, Coffee, M, Düdder, B, Gallucci, A, Gilbert, TK, Hagendorff, T, Van Halem, I, Hickman, E, Hildt, E, Holm, S, Kararigas, G, Kringen, P, Madai, VI, Wiinblad Mathez, E, Tithi, JJ, Westerlund, M, Wurth, R & Zicari, RV 2023, 'Lessons Learned from Assessing Trustworthy AI in Practice', Digital Society, vol. 2, no. 3, 35. https://doi.org/10.1007/s44206-023-00063-1

APA

Vetter, D., Amann, J., Bruneault, F., Coffee, M., Düdder, B., Gallucci, A., Gilbert, T. K., Hagendorff, T., Van Halem, I., Hickman, E., Hildt, E., Holm, S., Kararigas, G., Kringen, P., Madai, V. I., Wiinblad Mathez, E., Tithi, J. J., Westerlund, M., Wurth, R., & Zicari, R. V. (2023). Lessons Learned from Assessing Trustworthy AI in Practice. Digital Society, 2(3), [35]. https://doi.org/10.1007/s44206-023-00063-1

Vancouver

Vetter D, Amann J, Bruneault F, Coffee M, Düdder B, Gallucci A, et al. Lessons Learned from Assessing Trustworthy AI in Practice. Digital Society. 2023;2(3):35. https://doi.org/10.1007/s44206-023-00063-1

Author

Vetter, Dennis ; Amann, Julia ; Bruneault, Frédérick ; Coffee, Megan ; Düdder, Boris ; Gallucci, Alessio ; Gilbert, Thomas Krendl ; Hagendorff, Thilo ; Van Halem, Irmhild ; Hickman, Eleanore ; Hildt, Elisabeth ; Holm, Sune ; Kararigas, Georgios ; Kringen, Pedro ; Madai, Vince I. ; Wiinblad Mathez, Emilie ; Tithi, Jesmin Jahan ; Westerlund, Magnus ; Wurth, Renee ; Zicari, Roberto V. / Lessons Learned from Assessing Trustworthy AI in Practice. In: Digital Society. 2023 ; Vol. 2, No. 3.

Bibtex

@article{de4b251449504d20af1c5f2940398f3e,
title = "Lessons Learned from Assessing Trustworthy AI in Practice",
abstract = "Building artificial intelligence (AI) systems that adhere to ethical standards is a complex problem. Even though a multitude of guidelines for the design and development of such trustworthy AI systems exist, these guidelines focus on high-level and abstract requirements for AI systems, and it is often very difficult to assess if a specific system fulfills these requirements. The Z-Inspection{\textregistered} process provides a holistic and dynamic framework to evaluate the trustworthiness of specific AI systems at different stages of the AI lifecycle, including intended use, design, and development. It focuses, in particular, on the discussion and identification of ethical issues and tensions through the analysis of socio-technical scenarios and a requirement-based framework for ethical and trustworthy AI. This article is a methodological reflection on the Z-Inspection{\textregistered} process. We illustrate how high-level guidelines for ethical and trustworthy AI can be applied in practice and provide insights for both AI researchers and AI practitioners. We share the lessons learned from conducting a series of independent assessments to evaluate the trustworthiness of real-world AI systems, as well as key recommendations and practical suggestions on how to ensure a rigorous trustworthiness assessment throughout the lifecycle of an AI system. The results presented in this article are based on our assessments of AI systems in the healthcare sector and environmental monitoring, where we used the framework for trustworthy AI proposed in the Ethics Guidelines for Trustworthy AI by the European Commission{\textquoteright}s High-Level Expert Group on AI. However, the assessment process and the lessons learned can be adapted to other domains and include additional frameworks.",
author = "Dennis Vetter and Julia Amann and Fr{\'e}d{\'e}rick Bruneault and Megan Coffee and Boris D{\"u}dder and Alessio Gallucci and Gilbert, {Thomas Krendl} and Thilo Hagendorff and {Van Halem}, Irmhild and Eleanore Hickman and Elisabeth Hildt and Sune Holm and Georgios Kararigas and Pedro Kringen and Madai, {Vince I.} and {Wiinblad Mathez}, Emilie and Tithi, {Jesmin Jahan} and Magnus Westerlund and Renee Wurth and Zicari, {Roberto V.}",
year = "2023",
doi = "10.1007/s44206-023-00063-1",
language = "English",
volume = "2",
journal = "Digital Society",
issn = "2731-4650",
publisher = "Springer",
number = "3",
pages = "35",
}

RIS

TY - JOUR

T1 - Lessons Learned from Assessing Trustworthy AI in Practice

AU - Vetter, Dennis

AU - Amann, Julia

AU - Bruneault, Frédérick

AU - Coffee, Megan

AU - Düdder, Boris

AU - Gallucci, Alessio

AU - Gilbert, Thomas Krendl

AU - Hagendorff, Thilo

AU - Van Halem, Irmhild

AU - Hickman, Eleanore

AU - Hildt, Elisabeth

AU - Holm, Sune

AU - Kararigas, Georgios

AU - Kringen, Pedro

AU - Madai, Vince I.

AU - Wiinblad Mathez, Emilie

AU - Tithi, Jesmin Jahan

AU - Westerlund, Magnus

AU - Wurth, Renee

AU - Zicari, Roberto V.

PY - 2023

Y1 - 2023

N2 - Building artificial intelligence (AI) systems that adhere to ethical standards is a complex problem. Even though a multitude of guidelines for the design and development of such trustworthy AI systems exist, these guidelines focus on high-level and abstract requirements for AI systems, and it is often very difficult to assess if a specific system fulfills these requirements. The Z-Inspection® process provides a holistic and dynamic framework to evaluate the trustworthiness of specific AI systems at different stages of the AI lifecycle, including intended use, design, and development. It focuses, in particular, on the discussion and identification of ethical issues and tensions through the analysis of socio-technical scenarios and a requirement-based framework for ethical and trustworthy AI. This article is a methodological reflection on the Z-Inspection® process. We illustrate how high-level guidelines for ethical and trustworthy AI can be applied in practice and provide insights for both AI researchers and AI practitioners. We share the lessons learned from conducting a series of independent assessments to evaluate the trustworthiness of real-world AI systems, as well as key recommendations and practical suggestions on how to ensure a rigorous trustworthiness assessment throughout the lifecycle of an AI system. The results presented in this article are based on our assessments of AI systems in the healthcare sector and environmental monitoring, where we used the framework for trustworthy AI proposed in the Ethics Guidelines for Trustworthy AI by the European Commission’s High-Level Expert Group on AI. However, the assessment process and the lessons learned can be adapted to other domains and include additional frameworks.

AB - Building artificial intelligence (AI) systems that adhere to ethical standards is a complex problem. Even though a multitude of guidelines for the design and development of such trustworthy AI systems exist, these guidelines focus on high-level and abstract requirements for AI systems, and it is often very difficult to assess if a specific system fulfills these requirements. The Z-Inspection® process provides a holistic and dynamic framework to evaluate the trustworthiness of specific AI systems at different stages of the AI lifecycle, including intended use, design, and development. It focuses, in particular, on the discussion and identification of ethical issues and tensions through the analysis of socio-technical scenarios and a requirement-based framework for ethical and trustworthy AI. This article is a methodological reflection on the Z-Inspection® process. We illustrate how high-level guidelines for ethical and trustworthy AI can be applied in practice and provide insights for both AI researchers and AI practitioners. We share the lessons learned from conducting a series of independent assessments to evaluate the trustworthiness of real-world AI systems, as well as key recommendations and practical suggestions on how to ensure a rigorous trustworthiness assessment throughout the lifecycle of an AI system. The results presented in this article are based on our assessments of AI systems in the healthcare sector and environmental monitoring, where we used the framework for trustworthy AI proposed in the Ethics Guidelines for Trustworthy AI by the European Commission’s High-Level Expert Group on AI. However, the assessment process and the lessons learned can be adapted to other domains and include additional frameworks.

U2 - 10.1007/s44206-023-00063-1

DO - 10.1007/s44206-023-00063-1

M3 - Journal article

VL - 2

JO - Digital Society

JF - Digital Society

SN - 2731-4650

IS - 3

M1 - 35

ER -

ID: 367187390