Egalitarianism and algorithmic fairness

Research output: Contribution to journal › Journal article › Research › peer-review

Standard

Egalitarianism and algorithmic fairness. / Holm, Sune.

In: Philosophy and Technology, Vol. 36, 6, 2023.

Research output: Contribution to journal › Journal article › Research › peer-review

Harvard

Holm, S 2023, 'Egalitarianism and algorithmic fairness', Philosophy and Technology, vol. 36, 6. https://doi.org/10.1007/s13347-023-00607-w

APA

Holm, S. (2023). Egalitarianism and algorithmic fairness. Philosophy and Technology, 36, [6]. https://doi.org/10.1007/s13347-023-00607-w

Vancouver

Holm S. Egalitarianism and algorithmic fairness. Philosophy and Technology. 2023;36:6. https://doi.org/10.1007/s13347-023-00607-w

Author

Holm, Sune. / Egalitarianism and algorithmic fairness. In: Philosophy and Technology. 2023; Vol. 36.

BibTeX

@article{643504aedbb24067b54a94ce56ae53ca,
title = "Egalitarianism and algorithmic fairness",
abstract = "What does it mean for algorithmic classifications to be fair to different socially salient groups? According to classification parity criteria, what is required is equality across groups with respect to some performance measure such as error rates. Critics of classification parity object that classification parity entails that achieving fairness may require us to choose an algorithm that makes no group better off and some groups worse off than an alternative. In this article, I interpret the problem of algorithmic fairness as a case concerning the ethics of the distribution of algorithmic classifications across groups (as opposed to, e.g., the fairness of data collection). I begin with a short introduction of algorithmic fairness as a problem discussed in machine learning. I then show how the criticism raised against classification parity is a form of leveling down objection, and I interpret the egalitarianism of classification parity as deontic egalitarianism. I then discuss a challenge to this interpretation and suggest a revision. Finally, I examine how my interpretation provides proponents of classification parity with a response to the leveling down criticism and how it relates to a recent suggestion to evaluate fairness for automated decision-making systems based on risk and welfare considerations from behind a veil of ignorance.",
author = "Sune Holm",
year = "2023",
doi = "10.1007/s13347-023-00607-w",
language = "English",
volume = "36",
journal = "Philosophy and Technology",
issn = "2210-5433",
publisher = "Springer",
}

RIS

TY - JOUR

T1 - Egalitarianism and algorithmic fairness

AU - Holm, Sune

PY - 2023

Y1 - 2023

N2 - What does it mean for algorithmic classifications to be fair to different socially salient groups? According to classification parity criteria, what is required is equality across groups with respect to some performance measure such as error rates. Critics of classification parity object that classification parity entails that achieving fairness may require us to choose an algorithm that makes no group better off and some groups worse off than an alternative. In this article, I interpret the problem of algorithmic fairness as a case concerning the ethics of the distribution of algorithmic classifications across groups (as opposed to, e.g., the fairness of data collection). I begin with a short introduction of algorithmic fairness as a problem discussed in machine learning. I then show how the criticism raised against classification parity is a form of leveling down objection, and I interpret the egalitarianism of classification parity as deontic egalitarianism. I then discuss a challenge to this interpretation and suggest a revision. Finally, I examine how my interpretation provides proponents of classification parity with a response to the leveling down criticism and how it relates to a recent suggestion to evaluate fairness for automated decision-making systems based on risk and welfare considerations from behind a veil of ignorance.

AB - What does it mean for algorithmic classifications to be fair to different socially salient groups? According to classification parity criteria, what is required is equality across groups with respect to some performance measure such as error rates. Critics of classification parity object that classification parity entails that achieving fairness may require us to choose an algorithm that makes no group better off and some groups worse off than an alternative. In this article, I interpret the problem of algorithmic fairness as a case concerning the ethics of the distribution of algorithmic classifications across groups (as opposed to, e.g., the fairness of data collection). I begin with a short introduction of algorithmic fairness as a problem discussed in machine learning. I then show how the criticism raised against classification parity is a form of leveling down objection, and I interpret the egalitarianism of classification parity as deontic egalitarianism. I then discuss a challenge to this interpretation and suggest a revision. Finally, I examine how my interpretation provides proponents of classification parity with a response to the leveling down criticism and how it relates to a recent suggestion to evaluate fairness for automated decision-making systems based on risk and welfare considerations from behind a veil of ignorance.

U2 - 10.1007/s13347-023-00607-w

DO - 10.1007/s13347-023-00607-w

M3 - Journal article

VL - 36

JO - Philosophy and Technology

JF - Philosophy and Technology

SN - 2210-5433

M1 - 6

ER -

ID: 332995954