Proposal for a Regulation of the European Parliament and the Council laying d...
Our remarks focus on two main issues: 1) providing operational tools to link the ethical and the legal dimensions of Trustworthy AI while avoiding risks of ethics washing; 2) the...
The Role of the GDPR in Designing the European Strategy on Artificial Intelli...
Starting from an analysis of EU Regulation 2016/679, the General Data Protection Regulation (GDPR), the author addresses the opportunity to translate the current strategies on...
PDF
Legal Materials as Big Data: (algo)Rithms Support Legal Interpretation. A Dia...
This webinar took place on 6 July 2021 and focused on the interplay between legal data and data science. It was entitled ‘Legal Materials as Big Data: (algo)Rithms to...
.webloc
Second SoBigData++ Awareness Panel: R. I. Platforms Data, Part 1
This webinar, which took place on 10 November 2020, explored data protection and intellectual property issues in platforms. The first speaker was...
.webloc
Privlib
Privlib is a Python software package to manage privacy risk and discrimination in tabular and sequential data. It comprises methods to assess privacy risk (PRUDEnce) and...
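The PRUDEnce-style notion of privacy risk can be illustrated with a minimal generic sketch (this is not Privlib's actual API; the records and quasi-identifier names below are made up): each record's re-identification risk is taken as 1/k, where k is the number of records sharing its quasi-identifier combination.

```python
from collections import Counter

def reidentification_risk(records, quasi_identifiers):
    """Assign each record a privacy risk of 1/k, where k is the number
    of records sharing its quasi-identifier combination (k-anonymity)."""
    keys = [tuple(r[q] for q in quasi_identifiers) for r in records]
    counts = Counter(keys)
    return [1.0 / counts[k] for k in keys]

# Hypothetical tabular data: zip and age act as quasi-identifiers.
data = [
    {"zip": "56100", "age": 34, "disease": "flu"},
    {"zip": "56100", "age": 34, "disease": "cold"},
    {"zip": "56127", "age": 51, "disease": "flu"},
]
risks = reidentification_risk(data, ["zip", "age"])
# the first two records share quasi-identifiers (k=2), the third is unique (k=1)
```

A risk of 1.0 flags a record that is unique on its quasi-identifiers and therefore most exposed to linkage attacks.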
Papers on Gender Bias in Academic Promotions
This dataset contains the results of a systematic mapping study conducted to analyse how the issue of gender bias in academic promotions has been addressed in the literature....
CSV
MANILA
MANILA is a low-code web application to support the specification and execution of machine learning fairness evaluations. In particular, through MANILA it is possible to...
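As a hedged illustration of the kind of metric such a fairness evaluation computes (a generic sketch, not MANILA's interface; the predictions and group labels are invented), statistical parity difference compares positive-prediction rates across a privileged and an unprivileged group:

```python
def statistical_parity_difference(y_pred, group):
    """P(pred = 1 | unprivileged) - P(pred = 1 | privileged).
    group: 1 marks the privileged group, 0 the unprivileged one."""
    priv = [p for p, g in zip(y_pred, group) if g == 1]
    unpriv = [p for p, g in zip(y_pred, group) if g == 0]
    return sum(unpriv) / len(unpriv) - sum(priv) / len(priv)

preds = [1, 0, 1, 1, 0, 0, 1, 0]   # hypothetical binary predictions
groups = [1, 1, 1, 1, 0, 0, 0, 0]  # hypothetical group membership
spd = statistical_parity_difference(preds, groups)
# privileged positive rate 3/4, unprivileged 1/4, so SPD is negative
```

A value of 0 indicates parity; negative values indicate the unprivileged group receives positive predictions less often.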
Democratizing Quality-Based Machine Learning Development through Extended Fea...
ML systems have become an essential tool for domain experts, data scientists, and researchers, allowing them to find answers to many complex business questions...
BibTeX
Grounds for Trust. Essential Epistemic Opacity and Computational Reliabilism
Several philosophical issues in connection with computer simulations rely on the assumption that results of simulations are trustworthy. Examples of these include the debate...
How the machine thinks. Understanding opacity in machine learning algorithms
This article considers the issue of opacity as a problem for socially consequential mechanisms of classification and ranking, such as spam filters, credit card fraud...
Machine Learning Explainability Via Microaggregation and Shallow Decision Trees
Artificial intelligence (AI) is being deployed in missions that are increasingly critical for human life. To build trust in AI and avoid an algorithm-based authoritarian...
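The microaggregation idea in this title can be sketched generically (a minimal univariate version under assumed data, not the paper's actual method, which pairs microaggregation with shallow decision trees): records are grouped into clusters of at least k and each value is replaced by its cluster mean, trading precision for disclosure protection.

```python
def microaggregate(values, k):
    """Univariate microaggregation: sort values, partition them into
    groups of at least k, and replace each value by its group mean."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    out = [0.0] * len(values)
    n = len(values)
    i = 0
    while i < n:
        # the last group absorbs the remainder so every group has >= k members
        j = n if n - i < 2 * k else i + k
        group = order[i:j]
        mean = sum(values[g] for g in group) / len(group)
        for g in group:
            out[g] = mean
        i = j
    return out

agg = microaggregate([1, 2, 9, 10, 11], 2)
# groups {1, 2} and {9, 10, 11}: each value becomes its group mean
```

Because every published value is shared by at least k records, no single individual's original value can be isolated from the aggregated output.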
Seeing without knowing. Limitations of transparency and its application to al...
Models for understanding and holding systems accountable have long rested upon ideals and logics of transparency. Being able to see a system is sometimes equated with being able...
Solving the Black Box Problem. A Normative Framework for Explainable Artifici...
Many of the computing systems programmed using Machine Learning are opaque: it is difficult to know why they do what they do or how they work. Explainable Artificial...
Toward Accountable Discrimination-Aware Data Mining
"Big Data" and data-mined inferences are affecting more and more of our lives, and concerns about their possible discriminatory effects are growing. Methods for... -
Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Predictio...
Recidivism prediction instruments (RPIs) provide decision-makers with an assessment of the likelihood that a criminal defendant will reoffend at a future point in time....
Algorithmic Decision Making Based on ML from Big Data. Can Transparency Resto...
Decision-making assisted by algorithms developed by machine learning is increasingly determining our lives. Unfortunately, full opacity about the process is the norm. Would...
Explaining Sentiment Classification with Synthetic Exemplars and Counter-Exem...
We present xspells, a model-agnostic local approach for explaining the decisions of a black box model for sentiment classification of short texts. The explanations provided...
HTML
Multi-layered Explanations from Algorithmic Impact Assessments in the GDPR
Impact assessments have received particular attention on both sides of the Atlantic as a tool for implementing algorithmic accountability. The aim of this paper is to address...
Explanation of Deep Models with Limited Interaction for Trade Secret and Priv...
An ever-increasing number of decisions affecting our lives are made by algorithms. For this reason, algorithmic transparency is becoming a pressing need: automated decisions...
Machine Learning Explainability Through Comprehensible Decision Trees
The role of decisions made by machine learning algorithms in our lives is ever increasing. In reaction to this phenomenon, the European General Data Protection Regulation...