Local to global

This is the core scientific/technical activity of the project. The main objective is to understand how to construct meaningful explanations.

Our goal is to “merge” local explanations to reach a global consensus on the reasons behind the decisions taken by an AI decision support system.
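As a concrete, minimal sketch of what such a merge could look like, the toy Python function below (all names hypothetical; this is not the project's actual algorithm) aggregates per-instance feature attributions, such as those produced by LIME or SHAP, into a single global importance ranking by averaging their absolute values across the explained instances:

```python
import numpy as np

def merge_local_explanations(local_attributions, feature_names):
    """Aggregate per-instance feature attributions (e.g., from LIME or
    SHAP) into a global importance ranking by averaging their absolute
    values across all explained instances."""
    matrix = np.abs(np.asarray(local_attributions))  # shape: (n_instances, n_features)
    global_importance = matrix.mean(axis=0)
    order = np.argsort(global_importance)[::-1]      # most important feature first
    return [(feature_names[i], float(global_importance[i])) for i in order]

# Toy usage: three local explanations over two features.
local = [[0.8, -0.1], [0.6, 0.2], [-0.7, 0.05]]
print(merge_local_explanations(local, ["income", "age"]))
```

Actual local-to-global methods can be richer, for instance synthesizing global decision rules instead of averaging scores, but the underlying aggregation step is the same in spirit.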

The research program will articulate the local-first explanation framework along three dimensions: the variety of data sources (relational data, text, images, ...); the variety of learning problems (binary classification, multi-label classification, regression, scoring, ranking, ...); and the variety of languages for expressing meaningful explanations.

From statistical to causal, mechanistic, and physical explanations

In this research line, we aim to integrate slow thinking along three different and possibly complementary directions, namely causality, knowledge injection, and logical reasoning. Orthogonal to these directions, we aim to target slow thinking both internally to the explanation algorithm, that is, having the explanation algorithm itself think slowly, and by design, that is, having the black box itself think slowly.
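To make the “by design” direction concrete, here is a deliberately simple sketch, with assumed names and a toy rule set rather than the project's method, of a decision pipeline whose fast black-box prediction is checked against injected domain knowledge expressed as logical rules, abstaining (and saying why) when they conflict:

```python
def slow_thinking_predict(black_box, rules, x):
    """Toy two-stage decision: a fast black-box prediction is checked
    against injected domain rules; on a conflict the system abstains
    and reports the violated rule as part of the explanation."""
    label = black_box(x)
    for name, rule in rules:
        if not rule(x, label):
            return None, f"abstained: prediction violates rule '{name}'"
    return label, "prediction consistent with domain knowledge"

# Hypothetical loan-scoring example.
black_box = lambda x: "approve" if x["income"] > 30_000 else "reject"
rules = [
    ("no approval with active default",
     lambda x, label: not (label == "approve" and x["in_default"])),
]
print(slow_thinking_predict(black_box, rules, {"income": 50_000, "in_default": True}))
```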

XAI Platform

This activity aims to establish the infrastructure for sharing experimental datasets and explanation algorithms with the research community, creating a common ground for researchers working on the explanation of black boxes across different domains. A dedicated exploratory (i.e., a virtual research environment) of the H2020 research infrastructure SoBigData will be activated, so that a variety of relevant resources, such as data, methods, experimental workflows, platforms, and literature, can be managed through the SoBigData e-infrastructure services and made available to the research community under a variety of regulated access policies.

All resources, provided their sharing is not prohibited by specific legal or ethical constraints, will be registered and described in a findable catalogue.
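For illustration only, the sketch below shows the kind of metadata such a catalogue record needs, discoverability fields plus an explicit access policy; the record structure and field names are assumptions, not the actual SoBigData catalogue schema:

```python
from dataclasses import dataclass, field

@dataclass
class CatalogueEntry:
    """Hypothetical metadata record for a resource registered in the
    findable catalogue: enough to make it discoverable and to encode
    its regulated access policy."""
    identifier: str
    title: str
    resource_type: str          # e.g., "dataset", "method", "workflow"
    access_policy: str          # e.g., "open", "on-request", "restricted"
    keywords: list = field(default_factory=list)
    landing_page: str = ""

entry = CatalogueEntry(
    identifier="xai-demo-001",
    title="Synthetic credit-scoring explanations",
    resource_type="dataset",
    access_policy="on-request",
    keywords=["XAI", "credit scoring"],
)
print(entry)
```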

Case studies

This line addresses two main aspects: 1) the user’s decision-making process with the eXplainable AI systems used to support high-stakes decisions; 2) use cases to test the explanation methods developed in the XAI project.

Ethical/legal framework for explanation

The project has a strong ethical motivation. It aims to empower users against the undesired, possibly illegal, effects of black-box automated decision-making systems, which may harm them, exploit their vulnerabilities, and violate their rights and freedoms. This activity covers the interdependencies and feedback loops among the technical, ethical, and legal aspects of the research program, and will be pursued in collaboration with scientists from a range of disciplines, including ethics and law.