Fairness and Abstraction in Sociotechnical Systems

A key goal of the fair-ML community is to develop machine-learning based systems that, once introduced into a social context, can achieve social and legal outcomes such as fairness, justice, and due process. Bedrock concepts in computer science—such as abstraction and modular design—are used to define notions of fairness and discrimination, to produce fairness-aware learning algorithms, and to intervene at different stages of a decision-making pipeline to produce "fair" outcomes. In this paper, however, we contend that these concepts render technical interventions ineffective, inaccurate, and sometimes dangerously misguided when they enter the societal context that surrounds decision-making systems. We outline this mismatch with five "traps" that fair-ML work can fall into even as it attempts to be more context-aware in comparison to traditional data science. We draw on studies of sociotechnical systems in Science and Technology Studies to explain why such traps occur and how to avoid them. Finally, we suggest ways in which technical designers can mitigate the traps through a refocusing of design in terms of process rather than solutions, and by drawing abstraction boundaries to include social actors rather than purely technical ones.

Data and Resources
  • BibTeX
  • HTML
Additional Info
Field Value
Author Friedler, Sorelle A.
Author Venkatasubramanian, Suresh
Author Selbst, Andrew D., andrew@datasociety.net
Author boyd, danah
DOI 10.1145/3287560.3287598
Group Ethics and Legality
Publisher Association for Computing Machinery
Source FAT* '19: Proceedings of the Conference on Fairness, Accountability, and Transparency, January 2019, Pages 59–68
Thematic Cluster Other
system:type ConferencePaper
Management Info
Field Value
Author Pozzi, Giorgia
Maintainer Pozzi, Giorgia
Version 1
Last Updated 5 March 2021, 12:36 (CET)
Created 9 February 2021, 13:23 (CET)