Platform and XUI

The objective of this line is the design and development of an XAI watchdog platform, i.e., a user interface that aims to explain a black-box model by supporting different types of explanations for different types of users and by providing interactive explanations that the user can navigate. We started the design from an analysis of the methods produced in the other research lines and of popular approaches from the literature. The objective of this exploration is twofold: (i) identify algorithms and methods to construct explanations around a black box; (ii) build an explanation process where the user can interact with both the black-box model and the explanation layer, possibly combining multiple explanation methods with different capabilities. This leads to a platform design consisting of two parts: a software library that integrates a wide set of explanation methods, XAILib, and a XUI (eXplainable User Interface), a human-computer interface that lets users interact with the explanation layer.

XAI-Library


The library has the objective of integrating into a coherent platform the explanation algorithms developed within the XAI project or published in the literature. The architecture of the library distinguishes three data types: tabular data, image data, and text data. To provide a uniform interface for the black box to be explained, a dedicated wrapper exposes all the functionalities required to classify instances with the model. The objective is to define a high-level grammar for setting up an explainable analytical pipeline. By design, the library makes no assumption on the models to be explained; instead, it relies on a set of interfaces designed around the most widespread ML libraries (i.e., scikit-learn, Keras, TensorFlow, PyTorch). For instance, a predict method is shared among the subclasses of the wrapper to adapt to models coming from any of these libraries. The wrapper is also responsible for applying data transformations to the instances to be classified, so that all methods operate on a uniform data layer.

Different explanation methods generate different explanation formats. We therefore defined a software interface that encapsulates the different formats by focusing on a classification of the capabilities of each explanation. The capabilities we identified are: feature importance, exemplars, counterexemplars, rules, and counterfactual rules. An explanation method can provide one or more of these capabilities by implementing the corresponding method. The design of the library promotes the extension of the repertoire of methodologies with new ones: the interface makes it easy to integrate existing methods and existing implementations (e.g., external explanation methods), requiring only the wrapper implementation. At the time of writing, the library has been extended with methods proposed by our research team (LORE [GMG2019], ABELE [GMM2019], LASTS [GMS2020]) and taken from the literature (LIME, SHAP, IntGrad, Grad-CAM, NAM, RISE).

The library has powered a few real-world case studies (detailed in the next section). These analytical cases gave us the opportunity to validate the analytical pipeline of the library and to design suitable visual interfaces that deliver the outcome of the explanation to the final user. At the time of writing, the library has been used to create three interfaces for explanation methods in the healthcare domain.
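To make the wrapper and capability interfaces concrete, the following minimal Python sketch illustrates the pattern described above. The class and method names (BlackBoxWrapper, SklearnWrapper, Explanation) are illustrative assumptions, not the actual XAILib API.

from abc import ABC, abstractmethod

import numpy as np
from sklearn.ensemble import RandomForestClassifier


class BlackBoxWrapper(ABC):
    # Uniform interface that every explanation method interacts with.
    @abstractmethod
    def predict(self, X):
        """Return class labels for the instances in X."""


class SklearnWrapper(BlackBoxWrapper):
    # Adapter for scikit-learn estimators; analogous adapters would
    # expose the same predict for Keras, TensorFlow, or PyTorch models.
    def __init__(self, model):
        self.model = model

    def predict(self, X):
        # Uniform data layer: coerce any array-like input to NumPy.
        return self.model.predict(np.asarray(X))


class Explanation:
    # Capability-oriented interface: a concrete explanation implements
    # only the capabilities its method supports (e.g., LORE offers rules
    # and counterfactual rules, SHAP offers feature importance).
    def feature_importance(self):
        raise NotImplementedError

    def exemplars(self):
        raise NotImplementedError

    def counterexemplars(self):
        raise NotImplementedError

    def rules(self):
        raise NotImplementedError

    def counterfactual_rules(self):
        raise NotImplementedError


# Any wrapped model can now be probed uniformly by the explainers.
X = np.random.rand(100, 4)
y = (X[:, 0] > 0.5).astype(int)
bbox = SklearnWrapper(RandomForestClassifier(n_estimators=10).fit(X, y))
print(bbox.predict(X[:5]))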


The Cardiac Risk evaluator

The Cardiac Risk evaluator is a model developed by the University of Coimbra to estimate the probability of death from cardiac causes for patients admitted to the Emergency Room. We developed a visual interface (to be submitted) that provides local explanations for each classified case. The explanation application exploits the LORE method of the library to provide a set of rules and counterfactual rules that give the practitioner an explanation of the outcome of the model. A web-based visual interface offers the doctor an interactive module for probing the classification model by means of "what-if" queries and explanations. Besides the explanation capabilities, in collaboration with the University of Coimbra, the interface introduces a verification approach based on model testing to compute and visualize the confidence of the prediction, so that the user can better weigh the decision of the algorithm. This verification addresses two aspects: (i) a model-checker exploration of the neighborhood of the instance to discover opposite cases; (ii) a theorem prover that checks the compliance of the proposed counterfactual rules with a set of prior-knowledge constraints for the case. The interface introduces a novel visual widget to explore cases related to the instance to be classified, as suggested by the rule and the counterfactual rule. A progressive exploration of the space of possibilities is enabled by a visual timeline that summarizes the doctor's exploration path, highlighting the progress of the related cases.
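As an illustration of the "what-if" probing supported by the interface, the sketch below varies a single feature of an instance and reports how the model's prediction responds. The stand-in model and feature names are hypothetical, not the actual Cardiac Risk evaluator.

import numpy as np


def what_if(predict, instance, feature_idx, values):
    # Return (value, prediction) pairs for copies of the instance in
    # which the chosen feature is set to each candidate value.
    outcomes = []
    for v in values:
        probe = np.array(instance, dtype=float)
        probe[feature_idx] = v
        outcomes.append((v, predict(probe.reshape(1, -1))[0]))
    return outcomes


# Stand-in model: "high risk" if systolic blood pressure exceeds 140.
predict = lambda X: (X[:, 0] > 140).astype(int)
patient = [150.0, 37.2]  # hypothetical features: [systolic_bp, temperature]
for value, label in what_if(predict, patient, feature_idx=0,
                            values=[120, 135, 150, 165]):
    print(f"systolic_bp={value} -> class {label}")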


Doctor XAI

Doctor XAI [PPP2020] provides an explanation for the prediction of the next most probable diagnoses for a patient, given their recent clinical history. We developed a visual interface that exploits the progressive disclosure of information related to the local instance to be classified and explained. The explanation method relies on LORE and brings evidence to practitioners about relevant diagnoses and their temporal evolution. The complexity of this information is modulated through a progressive disclosure mechanism: not all the information is shown at once; it is sequenced, with advanced features shown only in secondary views and only at the request of the user. Not all users need the same amount of information, and providing everything at once may be overwhelming. This approach also makes it possible to create separate interfaces with different levels of detail, for example stopping at the first stages for the patient and allowing further exploration for the medical specialist.
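The mechanism can be sketched as follows: the same explanation object is rendered at increasing levels of detail, so that a patient-facing view stops earlier than the specialist view. The levels and fields below are illustrative assumptions, not Doctor XAI's actual schema.

# Each level adds one layer of detail on top of the previous ones.
DISCLOSURE_LEVELS = ("prediction", "evidence", "rule")


def render(explanation, level):
    # Show only the parts of the explanation up to the requested level.
    shown = DISCLOSURE_LEVELS[:DISCLOSURE_LEVELS.index(level) + 1]
    return {k: explanation[k] for k in shown}


explanation = {
    "prediction": "heart failure",
    "evidence": ["hypertension (2021)", "atrial fibrillation (2023)"],
    "rule": "IF hypertension AND atrial_fibrillation THEN heart_failure",
}
print(render(explanation, "prediction"))  # patient-facing view
print(render(explanation, "rule"))        # specialist view, on request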


ISIC Explanation with ABELE

In [MGY2021] we built a dedicated interface for an explainer, based on ABELE [GMM2019], for a black-box model that classifies images of skin lesions. The interface is designed to help physicians in the diagnosis of skin cancer. Following the principle of combining multiple explanation methods, after an instance is classified the user is presented with two complementary explanations: a counterexemplar, i.e., a similar image classified differently, and a set of exemplar images sharing the same classification.
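The selection of exemplars and counterexemplars can be sketched with plain nearest neighbours in an embedding space; ABELE actually generates them through the latent space of an adversarial autoencoder, so the code below is a simplified stand-in operating on hypothetical data.

import numpy as np


def exemplars_and_counterexemplar(X, y, instance, label, k=3):
    # Return the k closest instances sharing the given label (exemplars)
    # and the single closest instance with a different label.
    dist = np.linalg.norm(X - instance, axis=1)
    same, other = np.flatnonzero(y == label), np.flatnonzero(y != label)
    exemplars = X[same[np.argsort(dist[same])[:k]]]
    counterexemplar = X[other[np.argmin(dist[other])]]
    return exemplars, counterexemplar


rng = np.random.default_rng(0)
X = rng.random((200, 8))            # hypothetical image embeddings
y = (X[:, 0] > 0.5).astype(int)     # hypothetical benign/malignant labels
query = rng.random(8)               # embedding of the instance to explain
ex, cex = exemplars_and_counterexemplar(X, y, query, label=1)
print(ex.shape, cex.shape)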


Research line people

Riccardo Guidotti
Assistant Professor
University of Pisa
R.LINE 1 ▪ 3 ▪ 4 ▪ 5

Salvo Rinzivillo
Researcher
ISTI - CNR Pisa
R.LINE 1 ▪ 3 ▪ 4 ▪ 5

Daniele Fadda
Researcher
ISTI - CNR Pisa
R.LINE 3

Francesca Naretto
Post-Doctoral Researcher
Scuola Normale
R.LINE 1 ▪ 3 ▪ 4 ▪ 5

Francesco Bodria
PhD Student
Scuola Normale
R.LINE 1 ▪ 3

Carlo Metta
Researcher
ISTI - CNR Pisa
R.LINE 1 ▪ 2 ▪ 3 ▪ 4

Eleonora Cappuccio
PhD Student
University of Pisa - Bari
R.LINE 3 ▪ 4

Alessio Malizia
Associate Professor
University of Pisa
R.LINE 3 ▪ 4

Giorgio Ghelli
Full Professor
University of Pisa
R.LINE 3


Line 3 - Publications

2025

  1. Towards Building a Trustworthy RAG-Based Chatbot for the Italian Public Administration
    Chandana Sree Mala, Christian Maio, Mattia Proietti, Gizem Gezici, Fosca Giannotti, and 3 more authors
    Sep 2025
  2. MAINLE: a Multi-Agent, Interactive, Natural Language Local Explainer of Classification Tasks
    Paulo Bruno Serafim, Romula Ferrer Filho, Stenio Freitas, Gizem Gezici, Fosca Giannotti, and 2 more authors
    Dec 2025

2024

  1. An Interactive Interface for Feature Space Navigation
    Eleonora Cappuccio, Isacco Beretta, Marta Marchiori Manerba, and Salvatore Rinzivillo
    Jun 2024
  2. A Frank System for Co-Evolutionary Hybrid Decision-Making
    Federico Mazzoni, Riccardo Guidotti, and Alessio Malizia
    Jun 2024
  3. A survey on the impact of AI-based recommenders on human behaviours: methodologies, outcomes and future directions
    Luca Pappalardo, Emanuele Ferragina, Salvatore Citraro, Giuliano Cornacchia, Mirco Nanni, and 9 more authors
    Dec 2024
  4. An Overview of Recent Approaches to Enable Diversity in Large Language Models through Aligning with Human Perspectives
    Benedetta Muscato, Chandana Sree Mala, Marta Marchiori Manerba, Gizem Gezici, and Fosca Giannotti
    Dec 2024

2023

  1. Benchmarking and survey of explanation methods for black box models
    Francesco Bodria, Fosca Giannotti, Riccardo Guidotti, Francesca Naretto, Dino Pedreschi, and 1 more author
    Data Mining and Knowledge Discovery, Jun 2023
  2. Improving trust and confidence in medical skin lesion diagnosis through explainable deep learning
    Carlo Metta, Andrea Beretta, Riccardo Guidotti, Yuan Yin, Patrick Gallinari, and 2 more authors
    International Journal of Data Science and Analytics, Jun 2023
  3. Reason to Explain: Interactive Contrastive Explanations (REASONX)
    Laura State, Salvatore Ruggieri, and Franco Turini
    Jun 2023
  4. EXPHLOT: EXplainable Privacy Assessment for Human LOcation Trajectories
    Francesca Naretto, Roberto Pellungrini, Salvatore Rinzivillo, and Daniele Fadda
    Jun 2023
  5. Demo: an Interactive Visualization Combining Rule-Based and Feature Importance Explanations
    Eleonora Cappuccio, Daniele Fadda, Rosa Lanzilotti, and Salvatore Rinzivillo
    In Proceedings of the 15th Biannual Conference of the Italian SIGCHI Chapter, Sep 2023
  6. Co-design of Human-centered, Explainable AI for Clinical Decision Support
    Cecilia Panigutti, Andrea Beretta, Daniele Fadda, Fosca Giannotti, Dino Pedreschi, and 2 more authors
    ACM Transactions on Interactive Intelligent Systems, Dec 2023

2022

  1. Explaining Black Box with Visual Exploration of Latent Space
    Francesco Bodria, Salvatore Rinzivillo, Daniele Fadda, Riccardo Guidotti, Fosca Giannotti, and 2 more authors
    Dec 2022
  2. User-driven counterfactual generator: a human centered exploration
    Isacco Beretta, Eleonora Cappuccio, and Marta Marchiori Manerba
    Dec 2022

2020

  1. Doctor XAI: an ontology-based approach to black-box sequential data classification explanations
    Cecilia Panigutti, Alan Perotti, and Dino Pedreschi
    In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency , Jan 2020

2019

  1. The AI black box explanation problem
    Riccardo Guidotti, Anna Monreale, and Dino Pedreschi
    Dec 2019

2018

  1. A Survey of Methods for Explaining Black Box Models
    Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Fosca Giannotti, and 1 more author
    ACM Computing Surveys, Aug 2018