The XAI project addresses the challenge of making AI explainable and understandable in human terms, and articulates its research along five Research Activities:
1. algorithms to infer local explanations and to generalize them to global ones (post-hoc), as well as algorithms that are transparent by design;
2. languages for expressing explanations as logic rules, with statistical and causal interpretations;
3. an XAI watchdog platform for sharing experimental datasets and explanation algorithms;
4. a repertoire of case studies aimed at involving final users as well;
5. a framework to study the interplay between XAI and the ethical and legal dimensions.
Local to global
This is the core scientific/technical activity of the project. The main objective is to understand how to construct meaningful explanations.
Our goal is to "merge" local explanations into a global consensus on the reasons behind the decisions taken by an AI decision-support system.
The research program will articulate the local-first explanation framework along different dimensions: the variety of data sources (relational, text, images, ...), the variety of learning problems (binary classification, multi-label classification, regression, scoring, ranking, ...), and the variety of languages for expressing meaningful explanations.
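As a minimal illustration of the local-to-global idea, the sketch below merges hypothetical per-instance feature-importance explanations (of the kind a LIME-style local explainer might produce) into a global ranking by averaging absolute weights. The function name, data, and aggregation scheme are illustrative assumptions, not the project's actual algorithms.

```python
from collections import defaultdict

def merge_local_explanations(local_explanations):
    """Aggregate per-instance feature importances into a global ranking
    by averaging the absolute importance of each feature across the
    instances in which it appears."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for explanation in local_explanations:
        for feature, weight in explanation.items():
            totals[feature] += abs(weight)
            counts[feature] += 1
    global_importance = {f: totals[f] / counts[f] for f in totals}
    # Sort features from most to least globally important.
    return sorted(global_importance.items(), key=lambda kv: -kv[1])

# Hypothetical local explanations for three instances of a credit model.
local_explanations = [
    {"income": 0.8, "age": -0.1, "debt": -0.5},
    {"income": 0.6, "debt": -0.7},
    {"income": 0.9, "age": 0.2, "debt": -0.4},
]
print(merge_local_explanations(local_explanations))
```

Averaging is only one of many possible consensus schemes; the project's framework explicitly leaves open how local explanations are combined and in which language the resulting global explanation is expressed.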
From statistical to causal and mechanistic, physical explanations
In this research line, we aim to integrate slow thinking along three different and possibly complementary directions: causality, knowledge injection, and logical reasoning. Orthogonally to these directions, we aim to target slow thinking both internally to the explanation algorithm, i.e., having the explanation algorithm itself think slowly, and by design, i.e., having the black box itself think slowly.
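Before a logic-rule explanation can be given a causal reading, it can first be given a statistical one, e.g., by measuring how faithfully the rule reproduces the black box's decisions on the records it covers. The sketch below is a toy illustration under assumed data structures (dict records, a callable black box); none of it is a project deliverable.

```python
def rule_fidelity(rule, records, black_box):
    """Fraction of records satisfying the rule's condition for which the
    black box returns the rule's predicted outcome: the rule's precision
    as a statistical explanation of the black box."""
    covered = [r for r in records if rule["condition"](r)]
    if not covered:
        return 0.0
    agree = sum(1 for r in covered if black_box(r) == rule["outcome"])
    return agree / len(covered)

# Toy black box for a credit decision, and a candidate explanation rule
# "debt > 0.5 -> deny" (both purely illustrative).
black_box = lambda r: "deny" if r["debt"] > 0.4 else "grant"
records = [{"debt": d} for d in (0.2, 0.45, 0.6, 0.9)]
rule = {"condition": lambda r: r["debt"] > 0.5, "outcome": "deny"}
print(rule_fidelity(rule, records, black_box))  # covers 0.6 and 0.9, both denied
```

High fidelity only establishes statistical association with the black box's behavior; a causal or mechanistic interpretation requires the further analyses this research line targets.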
XAI watchdog platform
This activity aims to establish the infrastructure for sharing experimental datasets and explanation algorithms with the research community, creating common ground for researchers working on black-box explanation across different domains. A dedicated exploratory (a virtual research environment) of the H2020 Research Infrastructure SoBigData will be activated, so that a variety of relevant resources, such as data, methods, experimental workflows, platforms, and literature, will be managed through the SoBigData e-infrastructure services and made available to the research community under a variety of regulated access policies.
All resources, unless restricted by specific legal or ethical constraints, will be registered and described in a findable catalogue.
Ethical/legal framework for explanation
The project has a strong ethical motivation: it aims to empower users against the undesired, possibly illegal, effects of black-box automated decision-making systems that may harm them, exploit their vulnerabilities, and violate their rights and freedoms. This activity covers the interdependencies and feedback among the technical, ethical, and legal aspects of the research program, and will be pursued in collaboration with scientists from a range of disciplines, including ethics and law.