To explain or not to explain? Artificial intelligence explainability in clinical decision support systems
Research output: Contribution to journal › Journal article › Research › peer-review
Standard
To explain or not to explain? Artificial intelligence explainability in clinical decision support systems. / Amann, Julia; Vetter, Dennis; Blomberg, Stig Nikolaj; Christensen, Helle Collatz; Coffee, Megan; Gerke, Sara; Gilbert, Thomas K.; Hagendorff, Thilo; Holm, Sune; Livne, Michelle; Spezzatti, Andy; Strümke, Inga; Zicari, Roberto V.; Madai, Vince Istvan.
In: PLOS Digital Health, Vol. 1, No. 2, e0000016, 2022.
RIS
TY - JOUR
T1 - To explain or not to explain? Artificial intelligence explainability in clinical decision support systems
AU - Amann, Julia
AU - Vetter, Dennis
AU - Blomberg, Stig Nikolaj
AU - Christensen, Helle Collatz
AU - Coffee, Megan
AU - Gerke, Sara
AU - Gilbert, Thomas K.
AU - Hagendorff, Thilo
AU - Holm, Sune
AU - Livne, Michelle
AU - Spezzatti, Andy
AU - Strümke, Inga
AU - Zicari, Roberto V.
AU - Madai, Vince Istvan
PY - 2022
Y1 - 2022
N2 - Explainability for artificial intelligence (AI) in medicine is a hotly debated topic. Our paper presents a review of the key arguments for and against explainability for AI-powered Clinical Decision Support Systems (CDSSs), applied to a concrete use case: an AI-powered CDSS currently used in the emergency call setting to identify patients with life-threatening cardiac arrest. More specifically, we performed a normative analysis using socio-technical scenarios to provide a nuanced account of the role of explainability for CDSSs in this concrete use case, allowing for abstraction to a more general level. Our analysis focused on three layers: technical considerations, human factors, and the designated system role in decision-making. Our findings suggest that whether explainability can provide added value to a CDSS depends on several key questions: technical feasibility, the level of validation in the case of explainable algorithms, the characteristics of the context in which the system is implemented, the designated role in the decision-making process, and the key user group(s). Thus, each CDSS will require an individualized assessment of explainability needs, and we provide an example of what such an assessment could look like in practice.
U2 - 10.1371/journal.pdig.0000016
DO - 10.1371/journal.pdig.0000016
M3 - Journal article
C2 - 36812545
VL - 1
JO - PLOS Digital Health
JF - PLOS Digital Health
SN - 2767-3170
IS - 2
M1 - e0000016
ER -