How to Assess Trustworthy AI in Practice
Research output: Working paper › Preprint › Research
Documents
- How to Assess Trustworthy AI in Practice
Accepted author manuscript, 465 KB, PDF document
This report is a methodological reflection on Z-Inspection®. Z-Inspection® is a holistic process for evaluating the trustworthiness of AI-based technologies at different stages of the AI lifecycle. It focuses, in particular, on identifying and discussing ethical issues and tensions through the elaboration of socio-technical scenarios, and it applies the European Union's High-Level Expert Group (EU HLEG) guidelines for trustworthy AI. This report illustrates for both AI researchers and AI practitioners how the EU HLEG guidelines for trustworthy AI can be applied in practice. We share the lessons learned from conducting a series of independent assessments to evaluate the trustworthiness of AI systems in healthcare. We also share key recommendations and practical suggestions on how to ensure a rigorous trustworthy AI assessment throughout the lifecycle of an AI system.
| Original language | English |
|---|---|
| Publisher | arxiv.org |
| Number of pages | 52 |
| Publication status | Published - 20 Jun 2022 |
Bibliographical note
On behalf of the Z-Inspection® initiative (2022)
Research areas
- cs.CY, Trustworthy, Artificial Intelligence, Machine Learning, Society, Law & Technology