Article: Algorithmic fairness and resentment

Title: Algorithmic fairness and resentment
Authors: Babic, Boris; Johnson King, Zoë
Keywords: Algorithmic ethics; Bias; Fairness; Priors; Resentment; Statistical evidence
Issue Date: 2023
Citation: Philosophical Studies, 2023
Abstract: In this paper we develop a general theory of algorithmic fairness. Drawing on Johnson King and Babic’s work on moral encroachment, on Gary Becker’s work on labor market discrimination, and on Strawson’s idea of resentment and indignation as responses to violations of the demand for goodwill toward oneself and others, we locate attitudes to fairness in an agent’s utility function. In particular, we first argue that fairness is a matter of a decision-maker’s relative concern for the plight of people from different groups, rather than of the outcomes produced for different groups. We then show how an agent’s preferences, including in particular their attitudes to error, give rise to their decision thresholds. Tying these points together, we argue that the agent’s relative degrees of concern for different groups manifest in a difference in decision thresholds applied to these groups.
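
A minimal decision-theoretic sketch of the abstract's threshold claim (the binary accept/reject framing and the utility symbols u_{TP}, u_{FP}, u_{TN}, u_{FN} are illustrative assumptions, not the paper's own formalism): let p be the probability that an individual is qualified. Accepting maximizes expected utility when

    p \, u_{TP} + (1 - p) \, u_{FP} \;\ge\; p \, u_{FN} + (1 - p) \, u_{TN},

which rearranges to the decision threshold

    p \;\ge\; p^{*} = \frac{u_{TN} - u_{FP}}{(u_{TN} - u_{FP}) + (u_{TP} - u_{FN})}.

On this sketch, making one group's false negatives more costly (lowering u_{FN} for that group) lowers p^{*} for that group alone, so differing relative degrees of concern for different groups show up directly as differing decision thresholds.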
Persistent Identifier: http://hdl.handle.net/10722/334978
ISSN: 0031-8116
2023 Impact Factor: 1.1
2023 SCImago Journal Rankings: 1.203
ISI Accession Number ID: WOS:001060437400001

DC Field | Value | Language
dc.contributor.author | Babic, Boris | -
dc.contributor.author | Johnson King, Zoë | -
dc.date.accessioned | 2023-10-20T06:52:10Z | -
dc.date.available | 2023-10-20T06:52:10Z | -
dc.date.issued | 2023 | -
dc.identifier.citation | Philosophical Studies, 2023 | -
dc.identifier.issn | 0031-8116 | -
dc.identifier.uri | http://hdl.handle.net/10722/334978 | -
dc.description.abstract | In this paper we develop a general theory of algorithmic fairness. Drawing on Johnson King and Babic’s work on moral encroachment, on Gary Becker’s work on labor market discrimination, and on Strawson’s idea of resentment and indignation as responses to violations of the demand for goodwill toward oneself and others, we locate attitudes to fairness in an agent’s utility function. In particular, we first argue that fairness is a matter of a decision-maker’s relative concern for the plight of people from different groups, rather than of the outcomes produced for different groups. We then show how an agent’s preferences, including in particular their attitudes to error, give rise to their decision thresholds. Tying these points together, we argue that the agent’s relative degrees of concern for different groups manifest in a difference in decision thresholds applied to these groups. | -
dc.language | eng | -
dc.relation.ispartof | Philosophical Studies | -
dc.subject | Algorithmic ethics | -
dc.subject | Bias | -
dc.subject | Fairness | -
dc.subject | Priors | -
dc.subject | Resentment | -
dc.subject | Statistical evidence | -
dc.title | Algorithmic fairness and resentment | -
dc.type | Article | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.1007/s11098-023-02006-5 | -
dc.identifier.scopus | eid_2-s2.0-85168905641 | -
dc.identifier.eissn | 1573-0883 | -
dc.identifier.isi | WOS:001060437400001 | -
