Grounds for Trust: Essential Epistemic Opacity and Computational Reliabilism
Several philosophical issues in connection with computer simulations rely on the assumption that results of simulations are trustworthy. Examples of these include the debate...
How the machine 'thinks': Understanding opacity in machine learning algorithms
This article considers the issue of opacity as a problem for socially consequential mechanisms of classification and ranking, such as spam filters, credit card fraud...
Evaluating local explanation methods on ground truth
Evaluating local explanation methods is a difficult task due to the lack of a shared and universally accepted definition of explanation. In the literature, one of the most...
Machine Learning Explainability Via Microaggregation and Shallow Decision Trees
Artificial intelligence (AI) is being deployed in missions that are increasingly critical for human life. To build trust in AI and avoid an algorithm-based authoritarian...
Explanation in artificial intelligence: Insights from the social sciences
There has been a recent resurgence in the area of explainable artificial intelligence as researchers and practitioners seek to provide more transparency to their algorithms...
Seeing without knowing: Limitations of transparency and its application to algorithmic accountability
Models for understanding and holding systems accountable have long rested upon ideals and logics of transparency. Being able to see a system is sometimes equated with being able...
Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence
Many of the computing systems programmed using Machine Learning are opaque: it is difficult to know why they do what they do or how they work. Explainable Artificial...
Toward Accountable Discrimination-Aware Data Mining
"Big Data" and data-mined inferences are affecting more and more of our lives, and concerns about their possible discriminatory effects are growing. Methods for...
Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments
Recidivism prediction instruments (RPIs) provide decision-makers with an assessment of the likelihood that a criminal defendant will reoffend at a future point in time...
Fair, Transparent, and Accountable Algorithmic Decision-Making Processes: The Premise, the Proposed Solutions, and the Open Challenges
The combination of increased availability of large amounts of fine-grained human behavioral data and advances in...
Algorithmic Decision-Making Based on Machine Learning from Big Data: Can Transparency Restore Accountability?
Decision-making assisted by algorithms developed by machine learning is increasingly determining our lives. Unfortunately, full opacity about the process is the norm. Would...
Explaining Image Classifiers Generating Exemplars and Counter-Exemplars from Latent Representations
We present an approach to explain the decisions of black-box image classifiers through synthetic exemplars and counter-exemplars learnt in the latent feature space. Our...
Multi-layered Explanations from Algorithmic Impact Assessments in the GDPR
Impact assessments have received particular attention on both sides of the Atlantic as a tool for implementing algorithmic accountability. The aim of this paper is to address...
Explanation of Deep Models with Limited Interaction for Trade Secret and Privacy Preservation
An ever-increasing number of decisions affecting our lives are made by algorithms. For this reason, algorithmic transparency is becoming a pressing need: automated decisions...
Explaining Any Time Series Classifier
We present a method to explain the decisions of black box models for time series classification. The explanation consists of factual and counterfactual shapelet-based rules...
Interpretable Next Basket Prediction Boosted with Representative Recipes
Food is an essential element of our lives and cultures, and a crucial part of human experience. The study of food purchases can drive the design of practical services such as...
Heterogeneous Document Embeddings for Cross-Lingual Text Classification
Funnelling (Fun) is a method for cross-lingual text classification (CLC) based on a two-tier ensemble for heterogeneous transfer learning. In Fun, 1st-tier classifiers, each...
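A minimal sketch of the two-tier idea just described, assuming single-label data, one labelled corpus per language, and every class present in every language so that calibrated posterior probabilities align column-wise; the function names are hypothetical and this is a simplification, not the authors' implementation:

    import numpy as np
    from sklearn.calibration import CalibratedClassifierCV
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC

    def train_funnelling(corpora):
        """corpora: dict mapping language -> (texts, labels), one shared label scheme."""
        first_tier, meta_X, meta_y = {}, [], []
        for lang, (texts, labels) in corpora.items():
            vectorizer = TfidfVectorizer(sublinear_tf=True)
            X = vectorizer.fit_transform(texts)
            # 1st tier: a language-specific classifier, calibrated so that its
            # posterior probabilities are comparable across languages.
            clf = CalibratedClassifierCV(LinearSVC())
            clf.fit(X, labels)
            first_tier[lang] = (vectorizer, clf)
            # The posteriors form a shared, language-independent feature space.
            meta_X.append(clf.predict_proba(X))
            meta_y.extend(labels)
        # 2nd tier: a single meta-classifier trained on all posteriors jointly.
        meta = CalibratedClassifierCV(LinearSVC())
        meta.fit(np.vstack(meta_X), np.array(meta_y))
        return first_tier, meta

    def predict_funnelling(first_tier, meta, lang, texts):
        vectorizer, clf = first_tier[lang]
        return meta.predict(clf.predict_proba(vectorizer.transform(texts)))

The point the sketch tries to convey is that first-tier posteriors funnel documents from all languages into one space, so a single second-tier learner can be trained on all of them at once.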
A comparative study of fairness-enhancing interventions in machine learning
Computers are increasingly used to make decisions that have significant impact on people's lives. Often, these predictions can affect different population subgroups...
GLocalX - Explaining in a Local to Global setting
GLocalX is a model-agnostic Local to Global explanation algorithm. Given a set of local explanations expressed in the form of decision rules, and a black-box model to explain,...
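As a rough illustration of the local-to-global direction only (a greedy simplification, not the GLocalX merge procedure itself), local explanations can be modelled as decision rules that are retained when they preserve fidelity to the black box on a reference set; every name below is hypothetical:

    import numpy as np

    def covers(conds, x):
        # conds: list of (feature_index, op, threshold) with op in {"<=", ">"}
        return all(x[f] <= t if op == "<=" else x[f] > t for f, op, t in conds)

    def ruleset_predict(rules, x, default):
        # First matching rule wins; fall back to the majority black-box label.
        for conds, label in rules:
            if covers(conds, x):
                return label
        return default

    def fidelity(rules, X, bb_preds, default):
        # Fraction of reference points where the ruleset mimics the black box.
        preds = np.array([ruleset_predict(rules, x, default) for x in X])
        return float(np.mean(preds == bb_preds))

    def local_to_global(local_rules, X, bb_preds):
        # bb_preds: black-box predictions on X, assumed non-negative integers.
        default = int(np.bincount(bb_preds).argmax())
        kept = []
        for rule in local_rules:  # greedily keep rules that do not hurt fidelity
            if fidelity(kept + [rule], X, bb_preds, default) >= fidelity(kept, X, bb_preds, default):
                kept.append(rule)
        return kept, default

GLocalX itself merges and generalizes rules rather than merely filtering them; the sketch only shows the contract: local rules in, a smaller global ruleset out.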