To explain or not to explain? Artificial intelligence explainability in clinical decision support systems

Research output: Contribution to journal › Journal article › Research › peer-review

Standard

To explain or not to explain? Artificial intelligence explainability in clinical decision support systems. / Amann, Julia; Vetter, Dennis; Blomberg, Stig Nikolaj; Christensen, Helle Collatz; Coffee, Megan; Gerke, Sara; Gilbert, Thomas K.; Hagendorff, Thilo; Holm, Sune; Livne, Michelle; Spezzatti, Andy; Strümke, Inga; Zicari, Roberto V.; Madai, Vince Istvan.

In: PLOS Digital Health, Vol. 1, No. 2, e0000016, 2022.

Research output: Contribution to journal › Journal article › Research › peer-review

Harvard

Amann, J, Vetter, D, Blomberg, SN, Christensen, HC, Coffee, M, Gerke, S, Gilbert, TK, Hagendorff, T, Holm, S, Livne, M, Spezzatti, A, Strümke, I, Zicari, RV & Madai, VI 2022, 'To explain or not to explain? Artificial intelligence explainability in clinical decision support systems', PLOS Digital Health, vol. 1, no. 2, e0000016. https://doi.org/10.1371/journal.pdig.0000016

APA

Amann, J., Vetter, D., Blomberg, S. N., Christensen, H. C., Coffee, M., Gerke, S., Gilbert, T. K., Hagendorff, T., Holm, S., Livne, M., Spezzatti, A., Strümke, I., Zicari, R. V., & Madai, V. I. (2022). To explain or not to explain? Artificial intelligence explainability in clinical decision support systems. PLOS Digital Health, 1(2), Article e0000016. https://doi.org/10.1371/journal.pdig.0000016

Vancouver

Amann J, Vetter D, Blomberg SN, Christensen HC, Coffee M, Gerke S et al. To explain or not to explain? Artificial intelligence explainability in clinical decision support systems. PLOS Digital Health. 2022;1(2):e0000016. https://doi.org/10.1371/journal.pdig.0000016

Author

Amann, Julia ; Vetter, Dennis ; Blomberg, Stig Nikolaj ; Christensen, Helle Collatz ; Coffee, Megan ; Gerke, Sara ; Gilbert, Thomas K. ; Hagendorff, Thilo ; Holm, Sune ; Livne, Michelle ; Spezzatti, Andy ; Strümke, Inga ; Zicari, Roberto V. ; Madai, Vince Istvan. / To explain or not to explain? Artificial intelligence explainability in clinical decision support systems. In: PLOS Digital Health. 2022 ; Vol. 1, No. 2.

Bibtex

@article{d44f8639c4e04aa886b03b391e198f08,
title = "To explain or not to explain? Artificial intelligence explainability in clinical decision support systems",
abstract = "Explainability for artificial intelligence (AI) in medicine is a hotly debated topic. Our paper presents a review of the key arguments in favor of and against explainability for AI-powered Clinical Decision Support Systems (CDSSs), applied to a concrete use case, namely an AI-powered CDSS currently used in the emergency call setting to identify patients with life-threatening cardiac arrest. More specifically, we performed a normative analysis using socio-technical scenarios to provide a nuanced account of the role of explainability for CDSSs in this concrete use case, allowing for abstraction to a more general level. Our analysis focused on three layers: technical considerations, human factors, and the designated role of the system in decision-making. Our findings suggest that whether explainability can provide added value to a CDSS depends on several key questions: technical feasibility, the level of validation in the case of explainable algorithms, the characteristics of the context in which the system is implemented, the designated role in the decision-making process, and the key user group(s). Thus, each CDSS will require an individualized assessment of explainability needs, and we provide an example of what such an assessment could look like in practice.",
author = "Julia Amann and Dennis Vetter and Blomberg, {Stig Nikolaj} and Christensen, {Helle Collatz} and Megan Coffee and Sara Gerke and Gilbert, {Thomas K.} and Thilo Hagendorff and Sune Holm and Michelle Livne and Andy Spezzatti and Inga Str{\"u}mke and Zicari, {Roberto V.} and Madai, {Vince Istvan}",
year = "2022",
doi = "10.1371/journal.pdig.0000016",
language = "English",
volume = "1",
journal = "PLOS Digital Health",
issn = "2767-3170",
publisher = "Public Library of Science",
number = "2",
pages = "e0000016",
}

RIS

TY - JOUR

T1 - To explain or not to explain? Artificial intelligence explainability in clinical decision support systems

AU - Amann, Julia

AU - Vetter, Dennis

AU - Blomberg, Stig Nikolaj

AU - Christensen, Helle Collatz

AU - Coffee, Megan

AU - Gerke, Sara

AU - Gilbert, Thomas K.

AU - Hagendorff, Thilo

AU - Holm, Sune

AU - Livne, Michelle

AU - Spezzatti, Andy

AU - Strümke, Inga

AU - Zicari, Roberto V.

AU - Madai, Vince Istvan

PY - 2022

Y1 - 2022

N2 - Explainability for artificial intelligence (AI) in medicine is a hotly debated topic. Our paper presents a review of the key arguments in favor of and against explainability for AI-powered Clinical Decision Support Systems (CDSSs), applied to a concrete use case, namely an AI-powered CDSS currently used in the emergency call setting to identify patients with life-threatening cardiac arrest. More specifically, we performed a normative analysis using socio-technical scenarios to provide a nuanced account of the role of explainability for CDSSs in this concrete use case, allowing for abstraction to a more general level. Our analysis focused on three layers: technical considerations, human factors, and the designated role of the system in decision-making. Our findings suggest that whether explainability can provide added value to a CDSS depends on several key questions: technical feasibility, the level of validation in the case of explainable algorithms, the characteristics of the context in which the system is implemented, the designated role in the decision-making process, and the key user group(s). Thus, each CDSS will require an individualized assessment of explainability needs, and we provide an example of what such an assessment could look like in practice.

AB - Explainability for artificial intelligence (AI) in medicine is a hotly debated topic. Our paper presents a review of the key arguments in favor of and against explainability for AI-powered Clinical Decision Support Systems (CDSSs), applied to a concrete use case, namely an AI-powered CDSS currently used in the emergency call setting to identify patients with life-threatening cardiac arrest. More specifically, we performed a normative analysis using socio-technical scenarios to provide a nuanced account of the role of explainability for CDSSs in this concrete use case, allowing for abstraction to a more general level. Our analysis focused on three layers: technical considerations, human factors, and the designated role of the system in decision-making. Our findings suggest that whether explainability can provide added value to a CDSS depends on several key questions: technical feasibility, the level of validation in the case of explainable algorithms, the characteristics of the context in which the system is implemented, the designated role in the decision-making process, and the key user group(s). Thus, each CDSS will require an individualized assessment of explainability needs, and we provide an example of what such an assessment could look like in practice.

U2 - 10.1371/journal.pdig.0000016

DO - 10.1371/journal.pdig.0000016

M3 - Journal article

C2 - 36812545

VL - 1

JO - PLOS Digital Health

JF - PLOS Digital Health

SN - 2767-3170

IS - 2

M1 - e0000016

ER -
