Science and technology for the explanation of AI decision making.

The XAI project focuses on the urgent open challenge of how to construct meaningful explanations of opaque AI/ML systems in the context of AI-based decision making. It aims at empowering individuals against the undesired effects of automated decision making, implementing the "right to explanation", and helping people make better decisions while preserving (and expanding) human autonomy.

Black box AI systems for automated decision making, often based on ML over (big) data, map a user's features into a class or a score without exposing the reasons why. This is problematic both because of the lack of transparency and because of possible biases that the algorithms inherit from prejudices and collection artifacts hidden in the training data, which may lead to unfair or wrong decisions. The future of AI lies in enabling people to collaborate with machines, which requires good communication, trust, clarity, and understanding.

This project aims at developing:
  1. an explanation infrastructure for benchmarking, equipped with platforms for the users' assessment of the explanations;
  2. an ethical-legal framework, in compliance with the provisions of the GDPR;
  3. a repertoire of case studies in explanation-by-design, mainly focused on health and fraud detection applications.

Latest News

Events, tutorials, round tables, conferences and more...

Partners

Scuola Normale Superiore

National Research Council

University of Pisa, Department of Computer Science
The 5 XAI research lines