Seeing without knowing: Limitations of transparency and its application to algorithmic accountability

Models for understanding and holding systems accountable have long rested upon ideals and logics of transparency. Being able to see a system is sometimes equated with being able to know how it works and govern it—a pattern that recurs in recent work about transparency and computational systems. But can “black boxes” ever be opened, and if so, would that ever be sufficient? In this article, we critically interrogate the ideal of transparency, trace some of its roots in scientific and sociotechnical epistemological cultures, and present 10 limitations to its application. We specifically focus on the inadequacy of transparency for understanding and governing algorithmic systems and sketch an alternative typology of algorithmic accountability grounded in constructive engagements with the limitations of transparency ideals.

Data and Resources
  • BibTeX
  • HTML
Additional Info
Creator: Ananny, Mike, ananny@usc.edu
Creator: Crawford, Kate
DOI: https://doi.org/10.1177/1461444816676645
Group: Ethics and Legality
Group: Social Impact of AI and explainable ML
Publisher: SAGE Publications
Source: New Media & Society, 2018, Vol. 20(3), 973–989
Thematic Cluster: Other
system:type: JournalArticle
Management Info
Author: Pozzi Giorgia
Maintainer: Pozzi Giorgia
Version: 1
Last Updated: 8 September 2023, 18:20 (CEST)
Created: 9 February 2021, 23:47 (CET)