Egalitarianism and algorithmic fairness

Research output: Contribution to journal › Journal article › peer-review

Documents

  • Full text

    Accepted author manuscript, 741 KB, PDF document

What does it mean for algorithmic classifications to be fair to different socially salient groups? According to classification parity criteria, what is required is equality across groups with respect to some performance measure, such as error rates. Critics object that classification parity entails that achieving fairness may require us to choose an algorithm that makes no group better off and some groups worse off than an alternative. In this article, I interpret the problem of algorithmic fairness as a problem concerning the ethics of the distribution of algorithmic classifications across groups (as opposed to, e.g., the fairness of data collection). I begin with a short introduction to algorithmic fairness as a problem discussed in machine learning. I then show that the criticism raised against classification parity is a form of the leveling-down objection, and I interpret the egalitarianism of classification parity as deontic egalitarianism. I then discuss a challenge to this interpretation and suggest a revision. Finally, I examine how my interpretation provides proponents of classification parity with a response to the leveling-down criticism, and how it relates to a recent suggestion to evaluate the fairness of automated decision-making systems on the basis of risk and welfare considerations from behind a veil of ignorance.
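
To illustrate what a classification parity criterion checks in practice, the following is a minimal sketch in Python (not taken from the article; the group labels, predictions, and tolerance threshold are hypothetical): it computes false positive and false negative rates per group and tests whether they are equal across groups up to an illustrative tolerance.

```python
import numpy as np

def error_rates(y_true, y_pred):
    """Return (false positive rate, false negative rate) for one group."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    fpr = np.mean(y_pred[y_true == 0] == 1)  # actual negatives classified as positive
    fnr = np.mean(y_pred[y_true == 1] == 0)  # actual positives classified as negative
    return fpr, fnr

def satisfies_classification_parity(y_true, y_pred, group, tol=0.02):
    """Check whether per-group error rates differ by more than a tolerance.

    `group` is a hypothetical array of group labels; `tol` is an
    illustrative threshold, not a value endorsed by the article.
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rates = [error_rates(y_true[group == g], y_pred[group == g])
             for g in np.unique(group)]
    fprs, fnrs = zip(*rates)
    return (max(fprs) - min(fprs) <= tol) and (max(fnrs) - min(fnrs) <= tol)

# Toy usage with synthetic labels for two groups "a" and "b":
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 1, 1, 0, 0, 0]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(satisfies_classification_parity(y_true, y_pred, group))
```

On this kind of criterion, fairness is a property of the distribution of error rates across groups, which is why an algorithm can fail it even when no group would be made better off by the alternative that satisfies it.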
Original language: English
Article number: 6
Journal: Philosophy and Technology
Volume: 36
Number of pages: 18
ISSN: 2210-5433
Publication status: Published - 2023
