How to Assess Trustworthy AI in Practice

Publication: Working paper › Preprint › Research

  • Roberto V. Zicari
  • Julia Amann
  • Frédérick Bruneault
  • Megan Coffee
  • Boris Düdder
  • Alessio Gallucci
  • Thomas Krendl Gilbert
  • Thilo Hagendorff
  • Irmhild van Halem
  • Eleanore Hickman
  • Elisabeth Hildt
  • Sune Hannibal Holm
  • Georgios Kararigas
  • Pedro Kringen
  • Vince I. Madai
  • Emilie Wiinblad Mathez
  • Jesmin Jahan Tithi
  • Dennis Vetter
  • Magnus Westerlund
  • Renee Wurth
This report is a methodological reflection on Z-Inspection®. Z-Inspection® is a holistic process used to evaluate the trustworthiness of AI-based technologies at different stages of the AI lifecycle. It focuses, in particular, on the identification and discussion of ethical issues and tensions through the elaboration of socio-technical scenarios. It draws on the general guidelines for trustworthy AI of the European Union's High-Level Expert Group (EU HLEG). This report illustrates for both AI researchers and AI practitioners how the EU HLEG guidelines for trustworthy AI can be applied in practice. We share the lessons learned from conducting a series of independent assessments to evaluate the trustworthiness of AI systems in healthcare. We also share key recommendations and practical suggestions on how to ensure a rigorous trustworthy AI assessment throughout the lifecycle of an AI system.
Original language: English
Publisher: arxiv.org
Number of pages: 52
DOI
Status: Published - 20 Jun 2022


ID: 314388253