Explaining Any Time Series Classifier

We present a method for explaining the decisions of black box models for time series classification. The explanation consists of factual and counterfactual shapelet-based rules revealing the reasons for the classification, together with a set of exemplars and counter-exemplars highlighting similarities to and differences from the time series under analysis. The proposed method first generates exemplar and counter-exemplar time series in a latent feature space and learns a local latent decision tree classifier. It then selects and decodes the generated instances that respect the decision rules explaining the classification. Finally, it learns a shapelet tree on them that reveals which parts of the time series must, and must not, be present for the black box to return the observed outcome. Extensive experiments show that the proposed method provides faithful, meaningful and interpretable explanations.
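As an informal illustration of the pipeline described above, the sketch below mimics its main steps with off-the-shelf tools. It is a minimal sketch, not the authors' implementation: the names `encoder`, `decoder` and `black_box` are assumptions standing in for a pretrained autoencoder and the classifier under analysis, and the final shapelet-tree step is only indicated in a comment.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def explain(x, black_box, encoder, decoder, n_neighbors=500, sigma=0.5, seed=0):
    # `black_box`, `encoder`, `decoder` are hypothetical callables: the
    # classifier to explain and a pretrained autoencoder's two halves.
    rng = np.random.default_rng(seed)
    z = encoder(x)  # latent representation of the series to explain
    # 1. Generate a synthetic neighborhood around z in the latent space.
    Z = z + sigma * rng.standard_normal((n_neighbors, z.shape[-1]))
    # 2. Decode the neighbors and label them with the black box.
    X_gen = decoder(Z)
    y_gen = black_box(X_gen)
    # 3. Learn a local decision tree on the latent neighborhood.
    latent_tree = DecisionTreeClassifier(max_depth=4).fit(Z, y_gen)
    # 4. Split the decoded series into exemplars (classified like x)
    #    and counter-exemplars (classified differently).
    y_x = latent_tree.predict(z.reshape(1, -1))[0]
    exemplars = X_gen[y_gen == y_x]
    counter_exemplars = X_gen[y_gen != y_x]
    # 5. A shapelet tree fitted on exemplars and counter-exemplars would
    #    then expose the subsequences behind the outcome (omitted here).
    return exemplars, counter_exemplars, latent_tree
```

In the method as described, the latent tree's factual and counterfactual rules are used to select which neighbors to decode; the sketch uses the black-box labels as a coarse stand-in for that selection step.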

Additional Info
Creator: Guidotti, Riccardo, riccardo.guidotti@unipi.it
Creator: Monreale, Anna, anna.monreale@unipi.it
Creator: Spinnato, Francesco, francesco.spinnato@sns.it
Creator: Pedreschi, Dino, dino.pedreschi@unipi.it
Creator: Giannotti, Fosca, fosca.giannotti@isti.cnr.it
DOI: 10.1109/CogMI50398.2020.00029
Group: Social Impact of AI and explainable ML
Publisher: 2020 IEEE Second International Conference on Cognitive Machine Intelligence (CogMI)
Source: 2020 IEEE Second International Conference on Cognitive Machine Intelligence (CogMI), 28-31 Oct 2020
Thematic Cluster: Social Data [SD]
system:type: ConferencePaper
Management Info
Author: Wright, Joanna
Maintainer: Guidotti, Riccardo
Version: 1
Last Updated: 8 September 2023, 17:40 (CEST)
Created: 22 March 2021, 14:40 (CET)