Explaining Image Classifiers Generating Exemplars and Counter-Exemplars from Latent Representations

We present an approach to explain the decisions of black-box image classifiers through synthetic exemplars and counter-exemplars learned in the latent feature space. Our explanation method exploits the latent representations learned by an adversarial autoencoder to generate a synthetic neighbourhood of the image for which an explanation is required. A decision tree is trained on a set of images represented in the latent space, and its decision rules are used to generate exemplar images showing how the original image can be modified while remaining in its class. Counterfactual rules are used to generate counter-exemplars showing how the original image can “morph” into another class. The explanation also includes a saliency map highlighting the areas of the image that contribute to its classification and the areas that push it towards another class. A wide and deep experimental evaluation shows that the proposed method outperforms existing explainers in terms of fidelity, relevance, coherence, and stability, besides providing the most useful and interpretable explanations.
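The pipeline the abstract describes — encode the image, perturb it in latent space, label the neighbourhood with the black box, fit a surrogate decision tree, and decode exemplars and counter-exemplars — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the random linear maps stand in for a trained adversarial autoencoder, the thresholded pixel sum stands in for a real black-box classifier, and exemplars are simply picked from the synthetic neighbourhood, whereas the paper synthesises them from the tree's decision and counterfactual rules.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical stand-ins (assumptions, not the paper's models): a trained
# adversarial autoencoder would supply encode/decode; random linear maps
# are used here only to make the sketch runnable.
W_enc = rng.normal(size=(64, 8))   # image (64-dim) -> latent (8-dim)
W_dec = rng.normal(size=(8, 64))   # latent -> image

def encode(x):
    return x @ W_enc

def decode(z):
    return z @ W_dec

x = rng.normal(size=(1, 64))       # the image to be explained
z = encode(x)

# Toy black box: thresholds the pixel sum at the original image's own sum,
# so the neighbourhood straddles the decision boundary (an assumption made
# purely so both classes appear in the sketch).
t = decode(z).sum()

def black_box(images):
    return (images.sum(axis=1) > t).astype(int)

# 1. Synthetic neighbourhood of z in latent space (Gaussian perturbations).
Z = z + 0.5 * rng.normal(size=(500, 8))
y = black_box(decode(Z))           # label neighbours via the black box

# 2. Surrogate decision tree trained on the latent points.
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(Z, y)

# 3. Exemplars: neighbours the tree assigns to the same class as the
#    original image; counter-exemplars: neighbours assigned elsewhere.
#    Both are decoded back to image space for inspection.
y0 = tree.predict(z)[0]
pred = tree.predict(Z)
exemplars = decode(Z[pred == y0][:3])
counter_exemplars = decode(Z[pred != y0][:3])
```

In the paper, the tree's decision and counterfactual rules drive the generation (and a saliency map is derived by comparing the image with its exemplars); the selection step above is only the simplest possible substitute.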

Additional Info
Creator: Guidotti, Riccardo, riccardo.guidotti@unipi.it
Creator: Monreale, Anna
Creator: Matwin, Stan
Creator: Pedreschi, Dino
DOI: https://doi.org/10.1609/aaai.v34i09.7116
Group: Social Impact of AI and explainable ML
Publisher: AAAI
Source: AAAI Vol. 34 No. 09: Issue 9: EAAI-20 / AAAI Special Programs
Thematic Cluster: Visual Analytics [VA]
system:type: ConferencePaper
Management Info
Author: Wright, Joanna
Maintainer: Guidotti, Riccardo
Version: 1
Last Updated: 8 September 2023, 18:04 (CEST)
Created: 22 February 2021, 14:45 (CET)