XAI Platform

Research Line 3

The objective of this line is the design and development of an XAI watchdog platform, i.e., a user interface that aims to explain a black-box model by acknowledging different types of explanations for different types of users and by providing interactive explanations that the user can navigate. We started the design from the analysis of the methods produced in the other research lines and of popular approaches from the literature. The objective of this exploration is twofold: (i) identify algorithms and methods to construct explanations around a black box; (ii) build an explanation process where the user can interact with both the black-box model and the explanation layer, possibly combining multiple explanation methods with different capabilities. This leads to a platform design consisting of two parts: a software library that integrates a wide set of explanation methods, XAILib; and a XUI (eXplainable User Interface), a human-computer interface that lets users interact with the explanation layer.

XAI-Library
The library aims to integrate into a coherent platform the explanation algorithms developed within the XAI project or published in the literature. The main architecture of the library distinguishes three data types: tabular data, image data, and text data. To provide a uniform interface for the black box to be explained, a dedicated wrapper has been designed that exposes all the functionalities required to classify instances with the model. The objective is to define a high-level grammar to set up an explainable analytical pipeline. By design, the library does not make any assumption on the models to be explained; instead, it relies on a set of interfaces designed around the most widespread ML libraries (i.e., scikit-learn, Keras, TensorFlow, PyTorch). For instance, a predict method is shared among the subclasses of the wrapper to adapt to models coming from any of these libraries. The wrapper is also responsible for applying data transformations to the instances to be classified, providing a uniform data layer for all the methods.
Different explanation methods generate different explanation formats. We therefore defined a software interface that encapsulates the different explanation formats by focusing on a classification of the capabilities of each explanation. The capabilities we identified are: feature importance, exemplars, counterexemplars, rules, and counterfactual rules. An explanation method can provide one or more of these capabilities by implementing the corresponding method. The design of the library promotes the extension of the repertoire of methods with new ones: existing methods and implementations (i.e., external explanation methods) can be integrated easily by providing only the wrapper implementation. At the time of writing, the library includes methods proposed by our research team (LORE [GMG2019], ABELE [GMM2019], LASTS [GMS2020]) and methods taken from the literature (LIME, SHAP, IntGrad, GradCAM, NAM, RISE).
The library has been exploited to power a few real-world case studies (detailed in the next section). These case studies allowed us to validate the analytical pipeline of the library and to design suitable visual interfaces that deliver the outcome of the explanation to the final user. So far, the library has been used to create three interfaces for explanation methods in the healthcare domain.
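To make the wrapper-and-capabilities design more concrete, the sketch below shows one possible shape of these interfaces in Python. It is a minimal illustration under assumed names (BlackBoxWrapper, SklearnWrapper, Explanation, Explainer, explain_instance); it is not the actual XAILib API.

```python
# Minimal sketch of the wrapper/explainer design described above.
# All class and method names are illustrative assumptions, not the actual XAILib API.
from abc import ABC, abstractmethod
import numpy as np


class BlackBoxWrapper(ABC):
    """Uniform interface around a trained model, whatever library it comes from."""

    @abstractmethod
    def predict(self, X: np.ndarray) -> np.ndarray:
        """Return class labels for a batch of (already transformed) instances."""


class SklearnWrapper(BlackBoxWrapper):
    """Adapter for scikit-learn estimators; similar adapters would wrap Keras,
    TensorFlow, or PyTorch models behind the same `predict` signature."""

    def __init__(self, model, transform=lambda X: X):
        self.model = model
        self.transform = transform  # uniform data layer shared by all explainers

    def predict(self, X):
        return self.model.predict(self.transform(X))


class Explanation:
    """Container exposing the capabilities listed above; a concrete explainer
    implements only the capabilities it supports."""

    def feature_importance(self):
        raise NotImplementedError

    def rules(self):
        raise NotImplementedError

    def counterfactual_rules(self):
        raise NotImplementedError

    def exemplars(self):
        raise NotImplementedError

    def counterexemplars(self):
        raise NotImplementedError


class Explainer(ABC):
    """Base class for explanation methods integrated in the library."""

    def __init__(self, blackbox: BlackBoxWrapper):
        self.blackbox = blackbox

    @abstractmethod
    def explain_instance(self, x: np.ndarray) -> Explanation:
        """Build a local explanation for a single instance."""
```

Under this scheme, a new explanation method would be plugged in by subclassing the explainer base class and returning an explanation object that implements only the capabilities it actually provides.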
The Cardiac Risk evaluator
The Cardiac Risk evaluator is a model developed by the University of Coimbra to evaluate the probability of death from cardiac causes in patients admitted to the Emergency Room. We developed a visual interface (to be submitted) that provides local explanations for each classified case. The explanation application exploits the LORE method of the library to provide a set of rules and counterfactual rules that give the practitioner an explanation of the outcome of the model. A web-based visual interface provides the doctor with an interactive module where the specialist may probe the classification model by means of "what-if" queries and explanations. Besides the explanation capabilities, in collaboration with the University of Coimbra, the interface introduces a verification approach based on model testing to compute and visualize the confidence of the prediction, so that the user can better weigh the decision of the algorithm. This verification addresses two aspects: (i) a model-checker exploration of the neighborhood of the instance to discover opposite cases; (ii) a theorem prover to check the compliance of the proposed counter-rules with a set of prior-knowledge constraints of the case. The interface introduces a novel visual widget to explore cases related to the instance to be classified, as suggested by the rule and counter-rule. A progressive exploration of the space of possibilities is enabled by a visual timeline that summarizes the doctor's exploration path, highlighting the progress through the related cases.
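As an illustration of how such "what-if" probing could sit on top of the library interfaces sketched above, the snippet below re-classifies and re-explains a patient record after the doctor edits some features. The names (what_if, feature_order) are hypothetical and only indicate the interaction pattern, not the actual implementation of the Cardiac Risk evaluator.

```python
# Hypothetical sketch of the "what-if" interaction pattern described above.
# Names such as what_if and feature_order are assumptions, not the real API.
import numpy as np


def what_if(blackbox, explainer, patient: dict, changes: dict, feature_order: list):
    """Re-classify and re-explain a patient record after the doctor edits some features."""
    probe = {**patient, **changes}                      # apply the doctor's hypothetical edits
    x = np.array([[probe[f] for f in feature_order]])   # encode in the model's feature order
    outcome = blackbox.predict(x)[0]                    # prediction for the edited case
    explanation = explainer.explain_instance(x[0])      # local rule / counter-rule explanation
    return outcome, explanation.rules(), explanation.counterfactual_rules()
```

In the interface described above, the returned counter-rules would additionally be checked against the prior-knowledge constraints before being visualized.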
Doctor XAI
Doctor XAI [PPP2020] provides an explanation for the prediction of the next most probable diagnoses for a patient, given their recent clinical history. We developed a visual interface that exploits the progressive disclosure of information related to the local instance to be classified and explained. The explanation method relies on LORE and brings evidence to the practitioners about relevant diagnoses and their temporal evolution. The complexity of this information is modulated through a progressive disclosure mechanism: not all the information is shown at once, but it is sequenced, with advanced features shown only in secondary views and only at the request of the user. Not all users need the same amount of information, and providing it all at once may be overwhelming. This approach also allows the creation of separate interfaces with different levels of detail, for example stopping at the first stages for the patient and letting the medical specialist explore further.
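Purely as an illustration of the progressive-disclosure idea (and not of Doctor XAI's actual design), explanation content could be staged into levels that are revealed only on request; the level names and contents below are assumptions.

```python
# Hypothetical sketch of staging explanation content for progressive disclosure.
# Level names and contents are illustrative, not Doctor XAI's actual design.
DISCLOSURE_LEVELS = [
    {"name": "prediction",  "content": ["predicted next diagnoses"]},
    {"name": "explanation", "content": ["relevant past diagnoses", "decision rule"]},
    {"name": "advanced",    "content": ["temporal evolution view", "counterfactual rules"]},
]


def visible_content(requested_level: int) -> list:
    """Return everything disclosed up to the level the user asked for."""
    shown = []
    for level in DISCLOSURE_LEVELS[: requested_level + 1]:
        shown.extend(level["content"])
    return shown
```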
ISIC Explanation with ABELE
In [MGY2021] we built a dedicated interface for an explainer, based on ABELE [GMM2019], for a black box that classifies skin lesion images. The interface is designed to help physicians in the diagnosis of skin cancer. Following the principle of combining multiple explanation methods, after an instance is classified the user is presented with two complementary explanations: a counterexemplar, i.e., an image classified differently, and a set of exemplar images with the same classification.
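A sketch of how the exemplar and counterexemplar capabilities of the library interface could drive such a view is given below; skin_lesion_view and the way the explainer is invoked are hypothetical, not the actual ABELE integration.

```python
# Hypothetical sketch: retrieving the two kinds of evidence shown in the interface.
# skin_lesion_view and the explainer calls are illustrative names, not the real API.
def skin_lesion_view(blackbox, explainer, image):
    """Collect what the interface displays for one classified skin lesion image."""
    label = blackbox.predict(image[None, ...])[0]            # classify the single image
    explanation = explainer.explain_instance(image)          # local explanation (ABELE-style)
    return {
        "predicted_class": label,
        "exemplars": explanation.exemplars(),                # images with the same classification
        "counterexemplars": explanation.counterexemplars(),  # images classified differently
    }
```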

Publications

[GMR2018]
A Survey of Methods for Explaining Black Box Models
Guidotti Riccardo, Monreale Anna, Ruggieri Salvatore, Turini Franco, Giannotti Fosca, Pedreschi Dino (2018). In ACM Computing Surveys (CSUR), 51(5), 1-42.

Abstract

In recent years, many accurate decision support systems have been constructed as black boxes, that is as systems that hide their internal logic to the user. This lack of explanation constitutes both a practical and an ethical issue. The literature reports many approaches aimed at overcoming this crucial weakness, sometimes at the cost of sacrificing accuracy for interpretability. The applications in which black box decision systems can be used are various, and each approach is typically developed to provide a solution for a specific problem and, as a consequence, it explicitly or implicitly delineates its own definition of interpretability and explanation. The aim of this article is to provide a classification of the main problems addressed in the literature with respect to the notion of explanation and the type of black box system. Given a problem definition, a black box type, and a desired explanation, this survey should help the researcher to find the proposals more useful for his own work. The proposed classification of approaches to open black box models should also be useful for putting the many research open questions in perspective.


Research Line 1▪3

[BGG2023]
Benchmarking and survey of explanation methods for black box models
Bodria Francesco, Giannotti Fosca, Guidotti Riccardo, Naretto Francesca, Pedreschi Dino, Rinzivillo Salvatore (2023). In Data Mining and Knowledge Discovery, Springer.

Abstract

The rise of sophisticated black-box machine learning models in Artificial Intelligence systems has prompted the need for explanation methods that reveal how these models work in an understandable way to users and decision makers. Unsurprisingly, the state-of-the-art exhibits currently a plethora of explainers providing many different types of explanations. With the aim of providing a compass for researchers and practitioners, this paper proposes a categorization of explanation methods from the perspective of the type of explanation they return, also considering the different input data formats. The paper accounts for the most representative explainers to date, also discussing similarities and discrepancies of returned explanations through their visual appearance. A companion website to the paper is provided as a continuous update to new explainers as they appear. Moreover, a subset of the most robust and widely adopted explainers, are benchmarked with respect to a repertoire of quantitative metrics.


Research Line 1▪3

[MBG2023]
Improving trust and confidence in medical skin lesion diagnosis through explainable deep learning
Metta Carlo, Beretta Andrea, Guidotti Riccardo, Yin Yuan, Gallinari Patrick, Rinzivillo Salvatore, Giannotti Fosca (2023). In International Journal of Data Science and Analytics, Springer.

Abstract

A key issue in critical contexts such as medical diagnosis is the interpretability of the deep learning models adopted in decision-making systems. Research in eXplainable Artificial Intelligence (XAI) is trying to solve this issue. However, often XAI approaches are only tested on generalist classifier and do not represent realistic problems such as those of medical diagnosis. In this paper, we aim at improving the trust and confidence of users towards automatic AI decision systems in the field of medical skin lesion diagnosis by customizing an existing XAI approach for explaining an AI model able to recognize different types of skin lesions. The explanation is generated through the use of synthetic exemplar and counter-exemplar images of skin lesions and our contribution offers the practitioner a way to highlight the crucial traits responsible for the classification decision. A validation survey with domain experts, beginners, and unskilled people shows that the use of explanations improves trust and confidence in the automatic decision system. Also, an analysis of the latent space adopted by the explainer unveils that some of the most frequent skin lesion classes are distinctly separated. This phenomenon may stem from the intrinsic characteristics of each class and may help resolve common misclassifications made by human experts.


Research Line 1▪3

[BRF2022]
Explaining Black Box with visual exploration of Latent Space
Bodria Francesco, Rinzivillo Salvatore, Fadda Daniele, Guidotti Riccardo, Giannotti Fosca, Pedreschi Dino (2022). In Proceedings of EuroVis 2022.

Abstract

Autoencoders are a powerful yet opaque feature reduction technique, on top of which we propose a novel way for the joint visual exploration of both latent and real space. By interactively exploiting the mapping between latent and real features, it is possible to unveil the meaning of latent features while providing deeper insight into the original variables. To achieve this goal, we exploit and re-adapt existing approaches from eXplainable Artificial Intelligence (XAI) to understand the relationships between the input and latent features. The uncovered relationships between input features and latent ones allow the user to understand the data structure concerning external variables such as the predictions of a classification model. We developed an interactive framework that visually explores the latent space and allows the user to understand the relationships of the input features with model prediction.

[PBF2022]
Co-design of human-centered, explainable AI for clinical decision support
Panigutti Cecilia, Beretta Andrea, Fadda Daniele, Giannotti Fosca, Pedreschi Dino, Perotti Alan, Rinzivillo Salvatore (2022). In ACM Transactions on Interactive Intelligent Systems.

Abstract

eXplainable AI (XAI) involves two intertwined but separate challenges: the development of techniques to extract explanations from black-box AI models, and the way such explanations are presented to users, i.e., the explanation user interface. Despite its importance, the second aspect has received limited attention so far in the literature. Effective AI explanation interfaces are fundamental for allowing human decision-makers to take advantage and oversee high-risk AI systems effectively. Following an iterative design approach, we present the first cycle of prototyping-testing-redesigning of an explainable AI technique, and its explanation user interface for clinical Decision Support Systems (DSS). We first present an XAI technique that meets the technical requirements of the healthcare domain: sequential, ontology-linked patient data, and multi-label classification tasks. We demonstrate its applicability to explain a clinical DSS, and we design a first prototype of an explanation user interface. Next, we test such a prototype with healthcare providers and collect their feedback, with a two-fold outcome: first, we obtain evidence that explanations increase users' trust in the XAI system, and second, we obtain useful insights on the perceived deficiencies of their interaction with the system, so that we can re-design a better, more human-centered explanation interface.


Research Line 1▪3▪4

[MBG2021]
Explainable Deep Image Classifiers for Skin Lesion Diagnosis
Metta Carlo, Beretta Andrea, Guidotti Riccardo, Yin Yuan, Gallinari Patrick, Rinzivillo Salvatore, Giannotti Fosca (2021). arXiv preprint. In International Journal of Data Science and Analytics.

Abstract

A key issue in critical contexts such as medical diagnosis is the interpretability of the deep learning models adopted in decision-making systems. Research in eXplainable Artificial Intelligence (XAI) is trying to solve this issue. However, often XAI approaches are only tested on generalist classifier and do not represent realistic problems such as those of medical diagnosis. In this paper, we analyze a case study on skin lesion images where we customize an existing XAI approach for explaining a deep learning model able to recognize different types of skin lesions. The explanation is formed by synthetic exemplar and counter-exemplar images of skin lesion and offers the practitioner a way to highlight the crucial traits responsible for the classification decision. A survey conducted with domain experts, beginners and unskilled people proof that the usage of explanations increases the trust and confidence in the automatic decision system. Also, an analysis of the latent space adopted by the explainer unveils that some of the most frequent skin lesion classes are distinctly separated. This phenomenon could derive from the intrinsic characteristics of each class and, hopefully, can provide support in the resolution of the most frequent misclassifications by human experts.


Research Line 1▪3▪4

[PPP2020]
Doctor XAI: an ontology-based approach to black-box sequential data classification explanations
Panigutti Cecilia, Perotti Alan, Pedreschi Dino (2020). In FAT* '20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency.

Abstract

Several recent advancements in Machine Learning involve blackbox models: algorithms that do not provide human-understandable explanations in support of their decisions. This limitation hampers the fairness, accountability and transparency of these models; the field of eXplainable Artificial Intelligence (XAI) tries to solve this problem providing human-understandable explanations for black-box models. However, healthcare datasets (and the related learning tasks) often present peculiar features, such as sequential data, multi-label predictions, and links to structured background knowledge. In this paper, we introduce Doctor XAI, a model-agnostic explainability technique able to deal with multi-labeled, sequential, ontology-linked data. We focus on explaining Doctor AI, a multilabel classifier which takes as input the clinical history of a patient in order to predict the next visit. Furthermore, we show how exploiting the temporal dimension in the data and the domain knowledge encoded in the medical ontology improves the quality of the mined explanations.


Research Line 1▪3▪4

[GMP2019]
The AI black box explanation problem
Guidotti Riccardo, Monreale Anna, Pedreschi Dino (2019). In ERCIM News, 116, 12-13.


Research Line 1▪2▪3

Researchers working on this line

Riccardo Guidotti, Assistant Professor, University of Pisa (R. line 1 ▪ 3 ▪ 4 ▪ 5)
Salvo Rinzivillo, Researcher, ISTI - CNR Pisa (R. line 1 ▪ 3 ▪ 4 ▪ 5)
Daniele Fadda, Researcher, ISTI - CNR Pisa (R. line 3)
Francesca Naretto, Postdoctoral Researcher, Scuola Normale (R. line 1 ▪ 3 ▪ 4 ▪ 5)
Francesco Bodria, PhD Student, Scuola Normale (R. line 1 ▪ 3)
Carlo Metta, Researcher, ISTI - CNR Pisa (R. line 1 ▪ 2 ▪ 3 ▪ 4)
Eleonora Cappuccio, PhD Student, University of Pisa - Bari (R. line 3 ▪ 4)
Alessio Malizia, Associate Professor, University of Pisa (R. line 1 ▪ 3 ▪ 4)
Fabio Michele Russo, Student, University of Pisa (R. line 3 ▪ 5)