Deny, dismiss and downplay: developers’ attitudes towards risk and their role in risk creation in the field of healthcare-AI

Research output: Contribution to journal › Journal article › Research › peer-review

Standard

Deny, dismiss and downplay : developers’ attitudes towards risk and their role in risk creation in the field of healthcare-AI. / Duke, Shaul A.

In: Ethics and Information Technology, Vol. 24, No. 1, 1, 03.2022.

Research output: Contribution to journal › Journal article › Research › peer-review

Harvard

Duke, SA 2022, 'Deny, dismiss and downplay: developers’ attitudes towards risk and their role in risk creation in the field of healthcare-AI', Ethics and Information Technology, vol. 24, no. 1, 1. https://doi.org/10.1007/s10676-022-09627-0

APA

Duke, S. A. (2022). Deny, dismiss and downplay: developers’ attitudes towards risk and their role in risk creation in the field of healthcare-AI. Ethics and Information Technology, 24(1), [1]. https://doi.org/10.1007/s10676-022-09627-0

Vancouver

Duke SA. Deny, dismiss and downplay: developers’ attitudes towards risk and their role in risk creation in the field of healthcare-AI. Ethics and Information Technology. 2022 Mar;24(1). 1. https://doi.org/10.1007/s10676-022-09627-0

Author

Duke, Shaul A. / Deny, dismiss and downplay : developers’ attitudes towards risk and their role in risk creation in the field of healthcare-AI. In: Ethics and Information Technology. 2022 ; Vol. 24, No. 1.

Bibtex

@article{84a030710adb4c70aff40e18fbb1ba1e,
title = "Deny, dismiss and downplay: developers{\textquoteright} attitudes towards risk and their role in risk creation in the field of healthcare-AI",
abstract = "Developers are often the engine behind the creation and implementation of new technologies, including in the artificial intelligence surge that is currently underway. In many cases these new technologies introduce significant risk to affected stakeholders; risks that can be reduced and mitigated by such a dominant party. This is fully recognized by texts that analyze risks in the current AI transformation, which suggest voluntary adoption of ethical standards and imposing ethical standards via regulation and oversight as tools to compel developers to reduce such risks. However, what these texts usually sidestep is the question of how aware developers are to the risks they are creating with these new AI technologies, and what their attitudes are towards such risks. This paper asks to rectify this gap in research, by analyzing an ongoing case study. Focusing on six Israeli AI startups in the field of radiology, I carry out a content analysis of their online material in order to examine these companies{\textquoteright} stances towards the potential threat their automated tools pose to patient safety and to the work-standing of healthcare professionals. Results show that these developers are aware of the risks their AI products pose, but tend to deny their own role in the technological transformation and dismiss or downplay the risks to stakeholders. I conclude by tying these findings back to current risk-reduction recommendations with regards to advanced AI technologies, and suggest which of them hold more promise in light of developers{\textquoteright} attitudes.",
keywords = "Affected stakeholders, Artificial intelligence, Automation, Developers, Healthcare, Risk",
author = "Duke, {Shaul A.}",
note = "Funding Information: Author would like to thank David S. Jones, Joost van Loon, Klaus Hoeyer, Zeev Rosenhek, Amy Fairchild, Dani Filc, and the two anonymous reviewers for their helpful comments on earlier drafts of this article. Special thanks to Lauren Duke for her valuable insights and assistance. Publisher Copyright: {\textcopyright} 2022, The Author(s), under exclusive licence to Springer Nature B.V.",
year = "2022",
month = mar,
doi = "10.1007/s10676-022-09627-0",
language = "English",
volume = "24",
journal = "Ethics and Information Technology",
issn = "1388-1957",
publisher = "Springer",
number = "1",

}

RIS

TY - JOUR

T1 - Deny, dismiss and downplay

T2 - developers’ attitudes towards risk and their role in risk creation in the field of healthcare-AI

AU - Duke, Shaul A.

N1 - Funding Information: Author would like to thank David S. Jones, Joost van Loon, Klaus Hoeyer, Zeev Rosenhek, Amy Fairchild, Dani Filc, and the two anonymous reviewers for their helpful comments on earlier drafts of this article. Special thanks to Lauren Duke for her valuable insights and assistance. Publisher Copyright: © 2022, The Author(s), under exclusive licence to Springer Nature B.V.

PY - 2022/3

Y1 - 2022/3

N2 - Developers are often the engine behind the creation and implementation of new technologies, including in the artificial intelligence surge that is currently underway. In many cases these new technologies introduce significant risks to affected stakeholders, risks that can be reduced and mitigated by such a dominant party. This is fully recognized by texts that analyze risks in the current AI transformation, which suggest voluntary adoption of ethical standards and the imposition of ethical standards via regulation and oversight as tools to compel developers to reduce such risks. However, what these texts usually sidestep is the question of how aware developers are of the risks they are creating with these new AI technologies, and what their attitudes are towards such risks. This paper seeks to rectify this gap in research by analyzing an ongoing case study. Focusing on six Israeli AI startups in the field of radiology, I carry out a content analysis of their online material in order to examine these companies’ stances towards the potential threat their automated tools pose to patient safety and to the work-standing of healthcare professionals. Results show that these developers are aware of the risks their AI products pose, but tend to deny their own role in the technological transformation and dismiss or downplay the risks to stakeholders. I conclude by tying these findings back to current risk-reduction recommendations with regard to advanced AI technologies, and suggest which of them hold more promise in light of developers’ attitudes.

AB - Developers are often the engine behind the creation and implementation of new technologies, including in the artificial intelligence surge that is currently underway. In many cases these new technologies introduce significant risks to affected stakeholders, risks that can be reduced and mitigated by such a dominant party. This is fully recognized by texts that analyze risks in the current AI transformation, which suggest voluntary adoption of ethical standards and the imposition of ethical standards via regulation and oversight as tools to compel developers to reduce such risks. However, what these texts usually sidestep is the question of how aware developers are of the risks they are creating with these new AI technologies, and what their attitudes are towards such risks. This paper seeks to rectify this gap in research by analyzing an ongoing case study. Focusing on six Israeli AI startups in the field of radiology, I carry out a content analysis of their online material in order to examine these companies’ stances towards the potential threat their automated tools pose to patient safety and to the work-standing of healthcare professionals. Results show that these developers are aware of the risks their AI products pose, but tend to deny their own role in the technological transformation and dismiss or downplay the risks to stakeholders. I conclude by tying these findings back to current risk-reduction recommendations with regard to advanced AI technologies, and suggest which of them hold more promise in light of developers’ attitudes.

KW - Affected stakeholders

KW - Artificial intelligence

KW - Automation

KW - Developers

KW - Healthcare

KW - Risk

U2 - 10.1007/s10676-022-09627-0

DO - 10.1007/s10676-022-09627-0

M3 - Journal article

AN - SCOPUS:85124014204

VL - 24

JO - Ethics and Information Technology

JF - Ethics and Information Technology

SN - 1388-1957

IS - 1

M1 - 1

ER -