How to Assess Trustworthy AI in Practice

Research output: Working paper › Preprint › Research

Authors

  • Roberto V. Zicari
  • Julia Amann
  • Frédérick Bruneault
  • Megan Coffee
  • Boris Düdder
  • Alessio Gallucci
  • Thomas Krendl Gilbert
  • Thilo Hagendorff
  • Irmhild van Halem
  • Eleanore Hickman
  • Elisabeth Hildt
  • Sune Hannibal Holm
  • Georgios Kararigas
  • Pedro Kringen
  • Vince I. Madai
  • Emilie Wiinblad Mathez
  • Jesmin Jahan Tithi
  • Dennis Vetter
  • Magnus Westerlund
  • Renee Wurth
This report is a methodological reflection on Z-Inspection®, a holistic process for evaluating the trustworthiness of AI-based technologies at different stages of the AI lifecycle. The process focuses, in particular, on identifying and discussing ethical issues and tensions through the elaboration of socio-technical scenarios, and it applies the general trustworthy AI guidelines of the European Union's High-Level Expert Group (EU HLEG). This report illustrates for both AI researchers and AI practitioners how the EU HLEG guidelines for trustworthy AI can be applied in practice. We share the lessons learned from conducting a series of independent assessments of the trustworthiness of AI systems in healthcare, along with key recommendations and practical suggestions on how to ensure a rigorous trustworthy AI assessment throughout the lifecycle of an AI system.
Original language: English
Publisher: arxiv.org
Number of pages: 52
DOIs
Publication status: Published - 20 Jun 2022

Bibliographical note

On behalf of the Z-Inspection® initiative (2022)

Research areas

  • cs.CY, Trustworthy, Artificial Intelligence, Machine Learning, Society, Law & Technology


ID: 314388253