Evaluating local explanation methods on ground truth

Evaluating local explanation methods is a difficult task due to the lack of a shared and universally accepted definition of explanation. In the literature, one of the most common ways to assess the performance of an explanation method is to measure the fidelity of the explanation with respect to the classification of the black box model adopted by an Artificial Intelligence system for making a decision. However, this kind of evaluation only measures the degree to which the local explainer reproduces the behaviour of the black box classifier with respect to the final decision. Therefore, the explanation provided by the local explainer could differ in content even though it leads to the same decision as the AI system. In this paper, we propose an approach for measuring the extent to which the explanations returned by local explanation methods are correct with respect to a synthetic ground truth explanation. Indeed, the proposed methodology enables the generation of synthetic transparent classifiers for which the reason for the decision taken, i.e., a synthetic ground truth explanation, is available by design. Experimental results show how the proposed approach makes it easy to evaluate local explanations on the ground truth and to characterize the quality of local explanation methods.
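
To illustrate the idea described in the abstract, the sketch below builds a toy transparent linear classifier whose ground-truth explanation (the per-feature contributions true_weights * x) is known by construction, and scores a candidate local explanation against it with a simple top-k matching measure. This is a minimal sketch under our own assumptions: the linear form of the synthetic classifier and the helpers ground_truth_explanation and explanation_f1 are illustrative, not the generation procedure or evaluation metrics used in the paper.

```python
# Minimal sketch (not the paper's implementation): a synthetic transparent
# classifier with a ground-truth explanation available by design, and a
# simple score for how well a local explanation recovers it.
import numpy as np

rng = np.random.default_rng(0)

n_features = 10
# Transparent classifier: only the first 3 features matter, by construction.
true_weights = np.zeros(n_features)
true_weights[:3] = rng.normal(size=3)

def transparent_classifier(x):
    """Synthetic classifier whose decision logic is fully known."""
    return int(true_weights @ x > 0.0)

def ground_truth_explanation(x):
    """Per-feature contributions to the decision (known by design)."""
    return true_weights * x

def explanation_f1(estimated, truth, k=3):
    """Hypothetical metric: F1 between the top-k features of an estimated
    local explanation and the truly relevant features."""
    est_top = set(np.argsort(-np.abs(estimated))[:k])
    relevant = set(np.flatnonzero(truth))
    tp = len(est_top & relevant)
    if tp == 0:
        return 0.0
    precision = tp / len(est_top)
    recall = tp / len(relevant)
    return 2 * precision * recall / (precision + recall)

# Example: score an explanation produced by some local explainer.
x = rng.normal(size=n_features)
estimated_explanation = rng.normal(size=n_features)  # placeholder for a real explainer's output
print("decision:", transparent_classifier(x))
print("F1 vs. ground truth:", explanation_f1(estimated_explanation, ground_truth_explanation(x)))
```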

Tags
Data and Resources
Additional Info
Creator: Guidotti, Riccardo, riccardo.guidotti@unipi.it
DOI: https://doi.org/10.1016/j.artint.2020.103428
Group: Social Impact of AI and explainable ML
Publisher: ScienceDirect
Source: Artificial Intelligence, Volume 291, February 2021, 103428
Thematic Cluster: Web Analytics [WA]
system:type: JournalArticle
Management Info
Author: Wright Joanna
Maintainer: Guidotti Riccardo
Version: 1
Last Updated: 8 September 2023, 18:28 (CEST)
Created: 4 February 2021, 14:32 (CET)