Algorithmic Decision Making Based on Machine Learning from Big Data

Decision-making assisted by algorithms developed through machine learning increasingly determines our lives. Unfortunately, full opacity about the process is the norm. Would transparency contribute to restoring accountability for such systems, as is often maintained? Several objections to full transparency are examined: the loss of privacy when datasets become public, the perverse effects of disclosure of the very algorithms themselves (gaming the system in particular), the potential loss of companies’ competitive edge, and the limited gains in answerability to be expected, since sophisticated algorithms are usually inherently opaque. It is concluded that, at least presently, full transparency for oversight bodies alone is the only feasible option; extending it to the public at large is normally not advisable. Moreover, it is argued that algorithmic decisions should preferably become more understandable; to that effect, the models of machine learning to be employed should either be interpreted ex post or be interpretable by design ex ante.

Additional Info
Creator: de Laat, Paul B.
Group: Social Impact of AI and explainable ML
Publisher: Springer
Source: Philosophy & Technology, volume 31, pages 525–541 (2018)
Thematic Cluster: Other
system:type: JournalArticle
Management Info
Author: Pozzi Giorgia
Maintainer: Pozzi Giorgia
Version: 1
Last Updated: 8 September 2023, 18:05 (CEST)
Created: 3 March 2021, 18:51 (CET)