Research lines

Research Line 1: Local to global

This is the core scientific/technical activity of the project. The main objective is to understand how to construct meaningful explanations. The framework rests on three building blocks: logic explanations, local explanations, and explanation composition.

The research program will articulate the local-first explanation framework along several dimensions: the variety of data sources (relational, text, images, ...), the variety of learning problems (binary classification, multi-label classification, regression, scoring, ranking, ...), and the variety of languages for expressing meaningful explanations.
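The local-first idea above can be made concrete with a minimal sketch: query a black box around one instance to obtain a local, per-feature explanation, then compose many local explanations into a global feature ranking. The black box and its behaviour below are invented for illustration; the local step is a simplified, LIME-like perturbation surrogate, not the project's actual algorithm.

```python
import random

# Hypothetical black box: we may only query its predictions.
def black_box(x):
    # An opaque score combining two features (invented for illustration).
    return 1.0 if x[0] * 0.8 + (x[1] > 0.5) * 0.4 > 0.6 else 0.0

def local_explanation(f, x, eps=0.3, n_samples=200, seed=0):
    """Attribute the prediction at x to each feature by probing how
    the black box responds to small random perturbations around x
    (a reduced, LIME-like local surrogate)."""
    rng = random.Random(seed)
    d = len(x)
    importances = [0.0] * d
    for _ in range(n_samples):
        z = [xi + rng.uniform(-eps, eps) for xi in x]
        dy = f(z) - f(x)
        for j in range(d):
            importances[j] += dy * (z[j] - x[j])
    return [imp / n_samples for imp in importances]

def global_explanation(f, instances):
    """Local-to-global step: aggregate absolute local importances
    over a sample of instances into one global feature ranking."""
    d = len(instances[0])
    totals = [0.0] * d
    for x in instances:
        for j, imp in enumerate(local_explanation(f, x)):
            totals[j] += abs(imp)
    return [t / len(instances) for t in totals]

instances = [[random.Random(i).random(), random.Random(i + 1).random()]
             for i in range(20)]
ranking = global_explanation(black_box, instances)
```

The composition step here is a deliberately crude average; the research program's point is precisely to replace such aggregation with principled explanation composition in an expressive logic language.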

Research Line 2: From statistical to causal and mechanistic, physical explanations

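The gap between statistical and causal explanations can be illustrated with a toy structural causal model (invented for illustration): a confounder Z drives both the treatment X and the outcome Y, so the observational contrast E[Y|X=1] - E[Y|X=0] greatly overstates the true interventional effect E[Y|do(X=1)] - E[Y|do(X=0)].

```python
import random

rng = random.Random(42)

# Toy structural causal model (an assumption for illustration):
# confounder Z influences both treatment X and outcome Y;
# X itself has only a small direct effect (0.2) on Y.
def sample(do_x=None):
    z = rng.gauss(0, 1)
    x = do_x if do_x is not None else (z + rng.gauss(0, 0.1) > 0)
    y = 0.2 * x + 1.0 * z + rng.gauss(0, 0.1)
    return x, y

# Statistical (observational) contrast: E[Y|X=1] - E[Y|X=0].
obs = [sample() for _ in range(20000)]
y1 = [y for x, y in obs if x]
y0 = [y for x, y in obs if not x]
statistical = sum(y1) / len(y1) - sum(y0) / len(y0)

# Causal contrast via intervention: E[Y|do(X=1)] - E[Y|do(X=0)].
do1 = [sample(do_x=1)[1] for _ in range(20000)]
do0 = [sample(do_x=0)[1] for _ in range(20000)]
causal = sum(do1) / len(do1) - sum(do0) / len(do0)
```

The statistical contrast comes out near 1.8 while the causal one stays near the true direct effect of 0.2: an explanation built from observational associations alone would attribute to X an influence that actually belongs to the confounder.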

Research Line 3: XAI Platform

This activity aims to establish the infrastructure for sharing experimental datasets and explanation algorithms with the research community, creating common ground for researchers working on the explanation of black boxes across different domains. A dedicated exploratory (i.e., a virtual research environment) of the H2020 Research Infrastructure SoBigData will be activated, so that a variety of relevant resources (data, methods, experimental workflows, platforms, and literature) will be managed through the SoBigData e-infrastructure services and made available to the research community under a variety of regulated access policies.

All resources, unless prohibited by specific legal or ethical constraints, will be registered and described in a findable catalogue.
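A catalogue record of this kind might carry, at minimum, an identifier, a resource type, an access policy, and any legal constraints that gate registration. The sketch below is purely illustrative: the field names and policy values are assumptions, not the actual SoBigData catalogue schema.

```python
from dataclasses import dataclass, field, asdict

# Hypothetical catalogue record; field names and values are
# illustrative assumptions, not the real SoBigData schema.
@dataclass
class CatalogueEntry:
    identifier: str
    title: str
    resource_type: str            # e.g. "dataset", "method", "workflow"
    access_policy: str            # e.g. "open", "on-request", "restricted"
    legal_constraints: list = field(default_factory=list)

    def is_registrable(self):
        """A resource enters the findable catalogue only if no
        legal/ethical constraint prohibits it."""
        return "prohibited" not in self.legal_constraints

entry = CatalogueEntry(
    identifier="xai-ehr-001",          # invented example identifier
    title="Synthetic EHR benchmark",
    resource_type="dataset",
    access_policy="on-request",
)
```

Keeping the access policy as explicit metadata is what allows a single catalogue to serve resources under several regulated access regimes at once.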

Research Line 4: Ethical/legal framework for explanation

The project has a strong ethical motivation: it aims to empower users against the undesired, possibly illegal, effects of black-box automated decision-making systems, which may harm them, exploit their vulnerabilities, and violate their rights and freedoms. This activity covers the interdependencies and feedback loops among the technical, ethical, and legal aspects of the research program, and will be pursued in collaboration with scientists from a range of disciplines, including ethics and law, who have already enthusiastically agreed to take part, such as the legal scholar Giovanni Comande (Scuola Sant'Anna Pisa) and the ethical philosopher Jeroen Van Den Hoven (TU Delft).

Research Line 5: Case studies

This activity aims to validate the approach and the framework with real users on many case studies, relying both on "in-house" big data sources, made available through SoBigData.eu, and on challenges put forward by our industrial and institutional partners and by the scientific community at large. Planned cases include:

1. Health

This case study addresses the challenging explanation problems of the medical domain, targeting systems such as DoctorAI, a multi-label Recurrent Neural Network trained on patients' Electronic Health Records. The objective is to devise explanations for "extreme" multi-label classification, also delivering causal explanations.
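One simple baseline for explaining a multi-label prediction over a visit sequence is leave-one-visit-out occlusion: for each predicted label, measure how much the label's score drops when a visit is removed from the history. The model below is a hypothetical, linear stand-in for a DoctorAI-style network, with invented diagnosis codes and weights; only the occlusion idea is the point.

```python
# Hypothetical stand-in for a DoctorAI-style model: scores each
# diagnosis label from a patient's sequence of visit codes.
# Codes and weights are invented for illustration.
WEIGHTS = {
    ("hypertension", "E11"): 0.6,
    ("hypertension", "I10"): 0.9,
    ("retinopathy", "E11"): 0.8,
}

def predict(visits):
    """Multi-label scores from a patient's list of visits
    (each visit is a list of diagnosis codes)."""
    scores = {}
    for label in {lab for lab, _ in WEIGHTS}:
        scores[label] = sum(WEIGHTS.get((label, code), 0.0)
                            for visit in visits for code in visit)
    return scores

def occlusion_explanation(visits, label):
    """Per-label attribution: the score drop for `label` when each
    visit is removed in turn (leave-one-visit-out occlusion)."""
    base = predict(visits)[label]
    contributions = []
    for i in range(len(visits)):
        reduced = visits[:i] + visits[i + 1:]
        contributions.append(base - predict(reduced)[label])
    return contributions

visits = [["E11"], ["I10", "E11"]]
contrib = occlusion_explanation(visits, "retinopathy")
```

Occlusion is purely statistical and label-by-label; scaling it to "extreme" label spaces and upgrading it to causal explanations is exactly what this case study is meant to probe.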

2. Fiscal fraud detection

This case study validates the expressiveness of the explanations in the context of fiscal fraud detection systems developed during the long-standing collaboration with the Italian Revenue Agency.