Multi-layered Explanations from Algorithmic Impact Assessments in the GDPR

Impact assessments have received particular attention on both sides of the Atlantic as a tool for implementing algorithmic accountability. The aim of this paper is to address how Data Protection Impact Assessments (DPIAs) (Art. 35) in the European Union's General Data Protection Regulation (GDPR) link the GDPR's two approaches to algorithmic accountability (individual rights and systemic governance) and can potentially lead to more accountable and explainable algorithms. We argue that algorithmic explanation should be understood not as a static statement but as a circular, multi-layered transparency process comprising several layers: general information about an algorithm, group-based explanations, and legal justification of individual decisions. We further argue that the impact assessment process plays a crucial role in connecting internal company heuristics and risk mitigation to outward-facing rights, and in forming the substance of several kinds of explanations.

Data and Resources (login required)
  • BibTeX
  • HTML
Additional Info
Creator: Kaminski, Margot E.
Creator: Malgieri, Gianclaudio
DOI: https://doi.org/10.1145/3351095.3372875
Group: Ethics and Legality
Group: Social Impact of AI and explainable ML
Publisher: ACM
Source: FAT* '20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, January 2020, Pages 68–79
Thematic Cluster: Other
system:type: ConferencePaper
Management Info
Author: Pozzi, Giorgia
Maintainer: Pozzi, Giorgia
Version: 1
Last Updated: 8 September 2023, 17:59 (CEST)
Created: 9 February 2021, 23:57 (CET)