Bias in context: What to do when complete bias removal is not an option

Research output: Contribution to journal › Letter › Research › peer-review

Documents

  • Fulltext: Final published version, 113 KB, PDF document

It is widely recognized that machine learning algorithms may be biased in the sense that they perform worse on some demographic groups than on others. This motivates algorithmic development aimed at removing such bias, which in turn may foster a hope, even an expectation, that algorithmic bias can be fully mitigated or removed (1). In this short comment, we make three points to qualify Wang et al.'s suggestion: 1) it may not be possible for an algorithm to perform equally well across groups on all measures, 2) which inequalities count as morally unacceptable bias is an ethical question, and 3) the answer to that ethical question will vary across decision contexts.
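As an illustrative aside (ours, not part of the published letter), point 1 can be made concrete with a standard identity from the algorithmic fairness literature (cf. Chouldechova, 2017). For a binary classifier evaluated within a group whose base rate is r, the positive predictive value (PPV), false positive rate (FPR), and false negative rate (FNR) are linked by

\[
\mathrm{FPR} \;=\; \frac{r}{1-r}\cdot\frac{1-\mathrm{PPV}}{\mathrm{PPV}}\cdot\bigl(1-\mathrm{FNR}\bigr),
\]

so when two groups have different base rates, no non-degenerate classifier can equalize PPV, FPR, and FNR across both groups simultaneously; at least one performance measure must differ.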
Original language: English
Article number: e2304710120
Journal: Proceedings of the National Academy of Sciences of the United States of America
Volume: 120
Issue number: 23
Number of pages: 1
ISSN: 0027-8424
Publication status: Published - 2023

Research areas

  • Bias

