On (assessing) the fairness of risk score models

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Documents

  • Preprint

    Submitted manuscript, 927 KB, PDF document

Recent work on algorithmic fairness has largely focused on the fairness of discrete decisions, or classifications. While such decisions are often based on risk score models, the fairness of the risk models themselves has received considerably less attention. Risk models are of interest for a number of reasons, including the fact that they communicate uncertainty about the potential outcomes to users, thus representing a way to enable meaningful human oversight. Here, we address fairness desiderata for risk score models. We identify the provision of similar epistemic value to different groups as a key desideratum for risk score fairness, and we show how even fair risk scores can lead to unfair risk-based rankings. Further, we address how to assess the fairness of risk score models quantitatively, including a discussion of metric choices and meaningful statistical comparisons between groups. In this context, we also introduce a novel calibration error metric that is less sample size-biased than previously proposed metrics, enabling meaningful comparisons between groups of different sizes. We illustrate our methodology - which is widely applicable in many other settings - in two case studies, one in recidivism risk prediction, and one in risk of major depressive disorder (MDD) prediction.
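
The abstract does not spell out the proposed calibration error metric, so as a rough illustration of the kind of per-group comparison described above, the sketch below computes a standard binned expected calibration error (ECE) separately for two groups. The binned_ece function, the group labels, and the simulated data are hypothetical illustrations; this is a conventional ECE estimator, not the less sample-size-biased metric introduced in the paper.

    import numpy as np

    def binned_ece(scores, labels, n_bins=10):
        """Standard binned expected calibration error (ECE).

        Hedge: a conventional estimator for illustration only, not the
        paper's proposed, less sample-size-biased calibration metric.
        """
        scores = np.asarray(scores, dtype=float)
        labels = np.asarray(labels, dtype=float)
        edges = np.linspace(0.0, 1.0, n_bins + 1)
        # Assign each score to a bin using the interior bin edges.
        bin_ids = np.digitize(scores, edges[1:-1])
        ece = 0.0
        for b in range(n_bins):
            mask = bin_ids == b
            if not mask.any():
                continue
            conf = scores[mask].mean()   # mean predicted risk in the bin
            freq = labels[mask].mean()   # observed outcome rate in the bin
            ece += mask.mean() * abs(conf - freq)
        return ece

    # Hypothetical usage: compare calibration error across two groups of equal size.
    rng = np.random.default_rng(0)
    for group in ("A", "B"):
        p = rng.uniform(size=500)        # simulated risk scores
        y = rng.binomial(1, p)           # outcomes drawn consistently with the scores
        print(group, round(binned_ece(p, y), 4))

Note that with groups of very different sizes, estimators like the one above are biased by sample size, which is exactly the issue the paper's metric is designed to mitigate.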

Original language: English
Title of host publication: Proceedings of the 6th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023
Number of pages: 13
Publisher: Association for Computing Machinery, Inc.
Publication date: 2023
Pages: 817-829
ISBN (Electronic): 9781450372527
DOIs
Publication status: Published - 2023
Event: 6th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023 - Chicago, United States
Duration: 12 Jun 2023 - 15 Jun 2023

Conference

Conference: 6th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2023
Country: United States
City: Chicago
Period: 12/06/2023 - 15/06/2023
Series: ACM International Conference Proceeding Series

Bibliographical note

Publisher Copyright:
© 2023 ACM.

Research areas

  • Algorithmic fairness, Calibration, Ethics, Major depressive disorder, Ranking, Recidivism, Risk scores


ID: 359976468