XAI Project

Science and technology
for the eXplanation of AI decision making

Imagine that a wealthy friend of yours asks his bank for a credit card for a vacation, only to discover that the credit limit he is offered is very low. The bank teller cannot explain why. Your stubborn friend pursues an explanation all the way up to the bank's executives, only to discover that an algorithm lowered his credit score. Why? After a long investigation, it turns out that the reason is bad credit by the former owner of your friend's house.


Black box AI systems for automated decision making, often based on machine learning over (big) data, map a user's features into a class or a score without exposing the reasons why. This is problematic not only for the lack of transparency, but also for possible biases that the algorithms inherit from human prejudices and collection artifacts hidden in the training data, which may lead to unfair or wrong decisions.

The XAI project focuses on the urgent open challenge of how to construct meaningful explanations of opaque AI/ML systems, introducing the local-to-global framework for black box explanation, articulated along three lines:

  1. The language for expressing explanations in terms of expressive logic rules, with statistical and causal interpretation;
  2. The inference of local explanations for revealing the decision rationale for a specific case, by auditing the black box in the vicinity of the target instance;
  3. The bottom-up generalization of many local explanations into simple global ones, with algorithms that optimize for quality and comprehensibility.
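The second line above, inferring a local explanation by auditing the black box around a target instance, can be sketched in a few lines of Python. This is a minimal, hypothetical illustration (not the project's actual algorithm): a toy opaque scorer is perturbed in the vicinity of one instance, and a weighted linear surrogate is fitted to reveal which features drive the decision locally.

```python
import numpy as np

def local_explanation(black_box, x, n_samples=500, scale=0.1, seed=0):
    """Fit an interpretable linear surrogate to a black box
    in the vicinity of a single instance x (local-audit sketch)."""
    rng = np.random.default_rng(seed)
    # Audit the black box on perturbations of the target instance.
    X = x + rng.normal(0.0, scale, size=(n_samples, x.size))
    y = black_box(X)
    # Weight samples by proximity to x (Gaussian kernel).
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * scale ** 2))
    # Weighted least squares: y ~ X @ coef + intercept.
    A = np.hstack([X, np.ones((n_samples, 1))])
    Aw = A * w[:, None]
    coef, *_ = np.linalg.lstsq(Aw.T @ A, Aw.T @ y, rcond=None)
    return coef[:-1], coef[-1]  # per-feature weights, intercept

# Hypothetical opaque scorer: nonlinear overall, but near the
# target instance only feature 0 matters.
black_box = lambda X: 2.0 * X[:, 0] + 0.1 * np.maximum(X[:, 1], 0) ** 2
weights, _ = local_explanation(black_box, np.array([1.0, -3.0]))
# weights[0] is close to 2.0; weights[1] is close to 0.0
```

The local weights expose the decision rationale for this specific case: near `x`, the score depends almost entirely on the first feature. Many such local surrogates are the raw material that the third line generalizes into global explanations.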

An intertwined line of research will investigate i) causal explanations, i.e., models that capture the causal relationships among the (endogenous and exogenous) variables and the decision, and ii) mechanistic/physical models that capture the detailed data generation behavior behind specific deep learning models, by means of the tools of statistical physics of complex systems.
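The idea of a causal explanation can be illustrated with a toy structural causal model, in the spirit of the credit example above. All variable names and coefficients here are hypothetical, chosen only to show how a do-style intervention isolates the causal effect of one variable on the decision:

```python
import random

def sample_score(do_history=None, seed=None):
    """Toy structural causal model (hypothetical):
    income (exogenous) and credit history both cause the score."""
    rng = random.Random(seed)
    income = rng.gauss(50, 10)                       # exogenous noise
    # do(history = v): override the natural mechanism for history.
    history = do_history if do_history is not None else rng.gauss(0.5, 0.2)
    score = 0.6 * history * 100 + 0.4 * income       # decision variable
    return score

# Average causal effect of intervening on credit history
# (clean history vs. poor history), with paired noise seeds.
n = 2000
ace = (sum(sample_score(1.0, s) for s in range(n))
       - sum(sample_score(0.0, s) for s in range(n))) / n
# ace equals 60.0: the effect of history on the score, with the
# exogenous income held fixed by the paired seeds.
```

Unlike a purely correlational summary, the intervention reads off the causal contribution of a single variable to the decision, which is the kind of relationship a causal explanation model aims to capture.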

This project will also develop:

  1. an explanation infrastructure for benchmarking the methods developed within and outside the project, equipped with platforms for the users' assessment of the explanations and the crowdsensing of observational decision data;
  2. an ethical-legal framework, addressing both the compliance of the developed methods with current legal standards and their impact on the "right to explanation" provisions of the GDPR; and
  3. a repertoire of case studies in explanation-by-design, with priority on health and fraud detection applications.