Explaining Explanation Methods

The most effective Artificial Intelligence (AI) systems rely on complex machine learning models because of their high performance. Unfortunately, these models base their decisions on a logic that is not understandable by humans, which makes them true black-box models. This lack of transparency in how AI systems make decisions is a clear limitation to their adoption in safety-critical and socially sensitive contexts. Since AI is employed in such a wide variety of applications, research in eXplainable AI (XAI) has recently attracted considerable attention, with distinct requirements for different types of explanations and different users. In this webinar, we briefly present the existing explanation problems, the main strategies adopted to solve them, and the most common types of explanations, with references to state-of-the-art explanation methods able to produce them. A short tutorial shows how to employ existing explanation libraries on tabular datasets.
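
As a taste of the hands-on part, the sketch below shows how a post-hoc explanation library can be applied to a tabular dataset. The webinar does not name a specific library; LIME, a scikit-learn RandomForestClassifier, and the Iris dataset are used here purely as illustrative assumptions.

```python
# Minimal sketch: explaining a single prediction of a black-box model
# trained on tabular data. LIME and the Iris dataset are assumptions,
# not the specific tools covered in the webinar.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Train an opaque ("black-box") model on a tabular dataset.
data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# Build a local, model-agnostic explainer over the training data.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)

# Explain one prediction: which features drove the model's output?
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=4
)
print(explanation.as_list())
```

The output is a list of (feature, weight) pairs describing how each feature contributed to the model's prediction for that single instance, which is the typical form of a local, feature-importance explanation for tabular data.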

Tags
Data and Resources
  • Explaining Explanation Methods (HTML)

Additional Info
Availability: On-Line
Course: Explaining Explanation Methods
Length: 75 minutes
Lesson number: 1
Prerequisites: NO
Provider Institution: CNR
Target users: PhD Students
Thematic Cluster: Social Data [SD]
Training material typology: Other
system:type: TrainingMaterial
Management Info
Author: Rapisarda Beatrice
Maintainer: Rapisarda Beatrice
Version: 1
Last Updated: 19 July 2022, 16:30 (CEST)
Created: 25 November 2020, 09:52 (CET)