LORE

Recent years have witnessed the rise of accurate but opaque decision systems that hide the logic of their internal decision processes from users. The lack of explanations for the decisions of black box systems is a key ethical issue, and a limitation to the adoption of machine learning components in socially sensitive and safety-critical contexts. In this paper we focus on the problem of black box outcome explanation, i.e., explaining the reasons for the decision taken on a specific instance. We propose LORE, a black-box-agnostic method able to provide interpretable and faithful explanations. LORE first learns a local interpretable predictor on a synthetic neighborhood generated by a genetic algorithm. It then derives from the logic of the local interpretable predictor a meaningful explanation consisting of: a decision rule, which explains the reasons for the decision; and a set of counterfactual rules, which proactively suggest the changes to the instance's features that lead to a different outcome. Extensive experiments show that LORE outperforms state-of-the-art methods both in the quality of the explanations and in the accuracy with which it mimics the black box.
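The abstract outlines a three-step pipeline: generate a synthetic neighborhood around the instance, fit a local interpretable predictor on it, and read rules off that predictor. The Python sketch below illustrates the idea under stated assumptions; it is not the released LORE code. The function name explain_instance is hypothetical, a plain Gaussian perturbation stands in for LORE's genetic neighborhood generation, and a scikit-learn decision tree serves as the local interpretable predictor.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def explain_instance(x, black_box_predict, n_samples=1000, scale=0.1, seed=0):
        """Explain black_box_predict's outcome on instance x via a local surrogate."""
        rng = np.random.default_rng(seed)
        # 1. Synthetic neighborhood around x (LORE itself evolves this set with
        #    a genetic algorithm so that both outcomes are well represented).
        Z = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
        y = black_box_predict(Z)  # label the neighborhood with the black box
        # 2. Local interpretable predictor: a shallow decision tree.
        surrogate = DecisionTreeClassifier(max_depth=4).fit(Z, y)
        # 3. Decision rule: the root-to-leaf path followed by x in the tree.
        t = surrogate.tree_
        path = surrogate.decision_path(x.reshape(1, -1)).indices
        premises = []
        for node in path[:-1]:  # the final node on the path is the leaf itself
            feat, thr = t.feature[node], t.threshold[node]
            op = "<=" if x[feat] <= thr else ">"
            premises.append(f"x[{feat}] {op} {thr:.3f}")
        outcome = surrogate.predict(x.reshape(1, -1))[0]
        # Counterfactual rules would come from paths to leaves with a different
        # outcome, preferring those that falsify the fewest premises above.
        return premises, outcome

Called as explain_instance(x, clf.predict) for some trained classifier clf, this returns premises such as "x[2] <= 0.537" together with the locally predicted outcome; LORE additionally searches the tree for minimally different paths leading to a different outcome to produce the counterfactual rules.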

Data and Resources
  • LORE code (Python)

Additional Info
Accessibility: Both
AccessibilityMode: Download
Availability: On-Line
Basic rights: Download
CreationDate: 2019-01-01
Creator: Guidotti, Riccardo
Field/Scope of use: Any use
Group: Social Impact of AI and explainable ML
Owner: Guidotti, Riccardo
Sublicense rights: No
Territory of use: World Wide
Thematic Cluster: Social Data [SD]
UsageMode: Download
system:type: Method
Management Info
Author: Trasarti Roberto
Maintainer: Trasarti Roberto
Version: 1
Last Updated: 8 September 2023, 14:47 (CEST)
Created: 26 January 2019, 23:56 (CET)