An XAI platform for sharing experimental datasets and explanation algorithms
Platform and XUI
The objective of this line is the design and development of an XAI watchdog platform, i.e. a user interface that aims to explain a black-box model by accommodating different types of explanations for different types of users and providing interactive explanations that the user can navigate. We started the design from the analysis of methods produced in the other research lines and of popular approaches from the literature. The objective of the exploration is twofold: (i) identify algorithms and methods to construct explanations around a black box; (ii) build an explanation process where the user can interact with both the black-box model and the explanation layer, possibly combining multiple explanation methods with different capabilities. This leads to a platform design consisting of two parts: a software library, XAILib, that integrates a wide set of explanation methods; and a XUI (eXplainable User Interface), a human-computer interface that lets users interact with the explanation layer.
XAI-Library
The library has the objective of integrating into a coherent platform the explanation algorithms developed within the XAI project or published in the literature. The main architecture of the library distinguishes three data types: tabular data, image data, and text data. To provide a uniform interface for a black box to be explained, a dedicated wrapper has been designed that exposes all the functionalities required to classify instances with the model. The objective is to define a high-level grammar to set up an explainable analytical pipeline. By design, the library does not make any assumption on the models to be explained; instead, it relies on a set of interfaces designed around the most widespread ML libraries (e.g. scikit-learn, Keras, TensorFlow, PyTorch). For instance, a predict method is shared among the subclasses of the wrapper to adapt to models coming from any of these libraries. The wrapper is also responsible for applying data transformations to the instances to be classified, so that all the methods operate on a uniform data layer. Different explanation methods generate different explanation formats. Thus, we defined a software interface that encapsulates the different explanation formats, based on a classification of the capabilities of each explanation. The capabilities we identified are: feature importance, exemplars, counterexemplars, rules, and counterfactual rules. An explanation method can provide one or more of these capabilities by implementing the corresponding method. The design of the library promotes the extension of the repertoire of methodologies with new ones. The interface allows existing methods and implementations (i.e. external explanation methods) to be integrated easily, by providing only the wrapper implementation. At the time of writing, the library has been extended with methods proposed by our research team (LORE [GMG2019], ABELE [GMM2019], LASTS [GMS2020]) and taken from the literature (LIME, SHAP, IntGrad, GradCam, NAM, RISE).
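As a minimal sketch of this design (with hypothetical class and method names — this is not the actual XAILib API), a black-box wrapper can share a predict method that first applies the uniform data layer, while an explanation container exposes only the capabilities an explainer actually implements:

```python
from abc import ABC, abstractmethod


class BlackBoxWrapper(ABC):
    """Uniform interface around a trained model (scikit-learn, Keras, PyTorch, ...)."""

    def __init__(self, model, transform=None):
        self.model = model
        self.transform = transform  # shared data layer applied before prediction

    def predict(self, X):
        # Apply the data transformation so every explanation method
        # sees instances in the same uniform format.
        if self.transform is not None:
            X = [self.transform(x) for x in X]
        return self._predict(X)

    @abstractmethod
    def _predict(self, X):
        """Library-specific prediction, one subclass per ML framework."""


class Explanation:
    """Encapsulates an explainer's output behind its declared capabilities."""

    CAPABILITIES = ("feature_importance", "exemplars", "counterexemplars",
                    "rules", "counterfactual_rules")

    def __init__(self, **parts):
        unknown = set(parts) - set(self.CAPABILITIES)
        if unknown:
            raise ValueError(f"unknown capabilities: {unknown}")
        self._parts = parts

    def has(self, capability):
        return capability in self._parts

    def get(self, capability):
        return self._parts[capability]
```

Under this scheme a rule-based explainer such as LORE would return an Explanation carrying rules and counterfactual rules but no feature importance, and a downstream interface can query has() to decide which widgets to render.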
The library has been exploited to power a few real-world case studies (detailed in the next section). These analytical cases allowed us to validate the analytical pipeline of the library and to design suitable visual interfaces that deliver the outcome of the explanation to the final user. At the time of writing, the library has been used to create three interfaces for explanation methods in the healthcare domain.
The Cardiac Risk evaluator
The Cardiac Risk evaluator is a model developed by the University of Coimbra for evaluating the probability of death for cardiac reasons in patients admitted to the Emergency Room. We developed a visual interface (to be submitted) that provides local explanations for each classified case. The explanation application exploits the LORE method of the library to provide a set of rules and counterfactual rules that give the practitioner an explanation of the outcome of the model. A web-based visual interface provides the doctor with an interactive module where the specialist may probe the classification model by means of “what-if” queries and explanations. Besides the explanation capabilities, in collaboration with the University of Coimbra, the interface introduces a verification approach based on model testing to compute and visualize the confidence of the prediction, so that the user can better weigh the decision of the algorithm. This verification addresses two aspects: (i) a model-checker exploration of the neighborhood of the instance to discover opposite cases; (ii) a theorem prover to check the compliance of the proposed counterfactual rules with a set of prior-knowledge constraints of the case. The interface introduces a novel visual widget to explore cases related to the instance to be classified, as suggested by the rule and the counterfactual rule. A progressive exploration of the space of possibilities is enabled by a visual timeline that summarizes the doctor's exploration path, highlighting the progress over the related cases.
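The core of a “what-if” query can be sketched as follows (a simplified illustration with hypothetical names, not the deployed interface's code): the specialist edits one or more feature values of the factual instance, the model is re-invoked on the edited copy, and the two outcomes are compared to see whether the decision flips.

```python
def apply_what_if(instance, changes):
    """Return a copy of the factual instance with the user's hypothetical edits."""
    probe = dict(instance)
    probe.update(changes)
    return probe


def what_if_query(predict, instance, changes):
    """Classify the factual instance and the edited one, and report the outcomes."""
    factual = predict(instance)
    counterfactual = predict(apply_what_if(instance, changes))
    return {
        "factual": factual,
        "counterfactual": counterfactual,
        "flipped": factual != counterfactual,  # did the decision change?
    }
```

In the interface, the edits would typically come from the premises of a counterfactual rule suggested by LORE, so the doctor can check directly whether the rule's suggested changes flip the model's prediction.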
Doctor XAI
Doctor XAI [PPP2020] provides an explanation for the prediction of the next most probable diagnoses for a patient, given their recent clinical history. We developed a visual interface that exploits the progressive disclosure of information related to a local instance to be classified and explained. The explanation method relies on LORE and brings evidence to the practitioners about relevant diagnoses and their temporal evolution. The complexity of this information is modulated through a progressive disclosure mechanism: not all the information is shown at once, but it is sequenced, with advanced features shown only in secondary views and only at the request of the user. Not all users need the same amount of information, and providing all of it at once may be overwhelming. This approach also allows the creation of separate interfaces with different levels of concepts, for example stopping at the first stages for the patient while letting the medical specialist explore further.
ISIC Explanation with ABELE
In [MGY2021] we built a dedicated interface for an explainer, based on ABELE [GMM2019], for a black box that classifies skin lesion images. The interface is designed to help physicians in the diagnosis of skin cancer. Following the principle of using multiple explanation methods, after an instance is classified, users are presented with two complementary explanations: a counterexemplar, i.e. an image classified differently, and a set of exemplar images with the same classification.
—
Research line people
Riccardo Guidotti
Assistant Professor University of Pisa
R.LINE 1 ▪ 3 ▪ 4 ▪ 5
Salvo Rinzivillo
Researcher ISTI - CNR Pisa
R.LINE 1 ▪ 3 ▪ 4 ▪ 5
Daniele Fadda
Researcher ISTI - CNR Pisa
R.LINE 3
Francesca Naretto
Postdoctoral Researcher Scuola Normale
R.LINE 1 ▪ 3 ▪ 4 ▪ 5
Francesco Bodria
PhD Student Scuola Normale
R.LINE 1 ▪ 3
Carlo Metta
Researcher ISTI - CNR Pisa
R.LINE 1 ▪ 2 ▪ 3 ▪ 4
Eleonora Cappuccio
PhD Student University of Pisa - Bari
R.LINE 3 ▪ 4
Alessio Malizia
Associate Professor University of Pisa
R.LINE 3 ▪ 4
Giorgio Ghelli
Full Professor University of Pisa
R.LINE 3
Line 3 - Publications
2025
Towards Building a Trustworthy RAG-Based Chatbot for the Italian Public Administration
Chandana Sree Mala, Christian di Maio, Mattia Proietti, Gizem Gezici, Fosca Giannotti, and 3 more authors
Building a Trustworthy Retrieval-Augmented Generation (RAG) chatbot for Italy’s public sector presents challenges that go beyond selecting an appropriate Large Language Model. A major issue is the retrieval phase, where Italian text embedders often underperform compared to English and multilingual counterparts, hindering precise identification and contextualization of critical information. Regulatory constraints further complicate matters by disallowing closed source or cloud based models, forcing reliance on on-premise or fully open source solutions that may not fully address the linguistic complexities of Italian documents. In our study, we evaluate three embedding approaches using a publicly available Italian dataset: a monolingual Italian approach, a translation based method leveraging English only embedders with backward reference mapping, and a multilingual framework applied to both original and translated texts. Our methodology involves chunking documents into coherent segments, embedding them in a high dimensional semantic space, and measuring retrieval accuracy via top-k similarity searches. Our results indicate that the translation based approach significantly improves retrieval performance over Italian specific models, suggesting that bilingual mapping can effectively address both domain specific challenges and regulatory constraints in developing RAG pipelines for public administration.
@inbook{MDP2025,author={Mala, Chandana Sree and di Maio, Christian and Proietti, Mattia and Gezici, Gizem and Giannotti, Fosca and Melacci, Stefano and Lenci, Alessandro and Gori, Marco},booktitle={HHAI 2025},doi={10.3233/faia250637},isbn={9781643686110},issn={1879-8314},line={3,5},month=sep,open_access={Gold},pages={196--204},publisher={IOS Press},title={Towards Building a Trustworthy RAG-Based Chatbot for the Italian Public Administration},visible_on_website={YES},year={2025}}
MAINLE: a Multi-Agent, Interactive, Natural Language Local Explainer of Classification Tasks
Paulo Bruno Serafim, Romula Ferrer Filho, Stenio Freitas, Gizem Gezici, Fosca Giannotti, and 2 more authors
@misc{SFF2025,author={Serafim, Paulo Bruno and Filho, Romula Ferrer and Freitas, STENIO and Gezici, Gizem and Giannotti, Fosca and Raimondi, Franco and Santos, Alexandre},line={1,3},month=dec,title={MAINLE: a Multi-Agent, Interactive, Natural Language Local Explainer of Classification Tasks},year={2025}}
2024
An Interactive Interface for Feature Space Navigation
Eleonora Cappuccio, Isacco Beretta, Marta Marchiori Manerba, and Salvatore Rinzivillo
In this paper, we present Feature Space Navigator, an interactive interface that allows an exploration of the decision boundary of a model. The proposal aims to provide users with an intuitive and direct way to navigate through the feature space, inspect model behavior, and perform what-if analyses via feature manipulations and visual feedback. We integrate multiple views including projections of high-dimensional data, decision boundary surfaces, and sensitivity indicators. The interface also supports real-time adjustments of feature values to observe the corresponding changes in the model predictions. Our experiments show that the system can help both novice and expert users to detect regions of uncertainty, identify influential features, and generate hypotheses for model improvement.
@inbook{CBM2024,author={Cappuccio, Eleonora and Beretta, Isacco and Marchiori Manerba, Marta and Rinzivillo, Salvatore},booktitle={HHAI 2024: Hybrid Human AI Systems for the Social Good},doi={10.3233/faia240184},isbn={9781643685229},issn={1879-8314},line={3},month=jun,open_access={Gold},publisher={IOS Press},title={An Interactive Interface for Feature Space Navigation},visible_on_website={YES},year={2024}}
A Frank System for Co-Evolutionary Hybrid Decision-Making
Federico Mazzoni, Riccardo Guidotti, and Alessio Malizia
Hybrid decision-making systems combine human judgment with algorithmic recommendations, yet coordinating these two sources of information remains challenging. We present FRANK, a co-evolutionary framework enabling humans and AI agents to iteratively exchange feedback and refine decisions over time. FRANK integrates rule-based reasoning, preference modeling, and a learning module that adapts recommendations based on user interaction. Through simulated and real-user experiments, we show that the co-evolution process helps users converge toward more stable and accurate decisions while increasing perceived transparency. The system allows humans to override or modify machine suggestions while the AI agent reshapes its internal models in response to human rationale. FRANK thus promotes a collaborative decision environment where human expertise and machine learning strengthen each other.
@inbook{MBP2024,author={Mazzoni, Federico and Guidotti, Riccardo and Malizia, Alessio},booktitle={Advances in Intelligent Data Analysis XXII},doi={10.1007/978-3-031-58553-1_19},isbn={9783031585531},issn={1611-3349},line={1,3,4},open_access={NO},pages={236–248},publisher={Springer Nature Switzerland},title={A Frank System for Co-Evolutionary Hybrid Decision-Making},visible_on_website={YES},year={2024}}
A survey on the impact of AI-based recommenders on human behaviours: methodologies, outcomes and future directions
Luca Pappalardo, Emanuele Ferragina, Salvatore Citraro, Giuliano Cornacchia, Mirco Nanni, and 9 more authors
Recommendation systems and assistants (in short, recommenders) are ubiquitous in online platforms and influence most actions of our day-to-day lives, suggesting items or providing solutions based on users’ preferences or requests. This survey analyses the impact of recommenders in four human-AI ecosystems: social media, online retail, urban mapping and generative AI ecosystems. Its scope is to systematise a fast-growing field in which terminologies employed to classify methodologies and outcomes are fragmented and unsystematic. We follow the customary steps of qualitative systematic review, gathering 144 articles from different disciplines to develop a parsimonious taxonomy of: methodologies employed (empirical, simulation, observational, controlled), outcomes observed (concentration, model collapse, diversity, echo chamber, filter bubble, inequality, polarisation, radicalisation, volume), and their level of analysis (individual, item, model, and systemic). We systematically discuss all findings of our survey substantively and methodologically, highlighting also potential avenues for future research. This survey is addressed to scholars and practitioners interested in different human-AI ecosystems, policymakers and institutional stakeholders who want to understand better the measurable outcomes of recommenders, and tech companies who wish to obtain a systematic view of the impact of their recommenders.
@misc{PFC2024,author={Pappalardo, Luca and Ferragina, Emanuele and Citraro, Salvatore and Cornacchia, Giuliano and Nanni, Mirco and Rossetti, Giulio and Gezici, Gizem and Giannotti, Fosca and Lalli, Margherita and Gambetta, Daniele and Mauro, Giovanni and Morini, Virginia and Pansanella, Valentina and Pedreschi, Dino},doi={10.48550/arXiv.2407.01630},line={3,4,5},month=dec,publisher={arXiv},title={A survey on the impact of AI-based recommenders on human behaviours: methodologies, outcomes and future directions},year={2024}}
An Overview of Recent Approaches to Enable Diversity in Large Language Models through Aligning with Human Perspectives
Benedetta Muscato, Chandana Sree Mala, Marta Marchiori Manerba, Gizem Gezici, and Fosca Giannotti
The varied backgrounds and experiences of human annotators inject different opinions and potential biases into the data, inevitably leading to disagreements. Yet, traditional aggregation methods fail to capture individual judgments since they rely on the notion of a single ground truth. Our aim is to review prior contributions to pinpoint the shortcomings that might cause stereotypical content generation. As a preliminary study, our purpose is to investigate state-of-the-art approaches, primarily focusing on the following two research directions. First, we investigate how adding subjectivity aspects to LLMs might guarantee diversity. We then look into the alignment between humans and LLMs and discuss how to measure it. Considering existing gaps, our review explores possible methods to mitigate the perpetuation of biases targeting specific communities. However, we recognize the potential risk of disseminating sensitive information due to the utilization of socio-demographic data in the training process. These considerations underscore the inclusion of diverse perspectives while taking into account the critical importance of implementing robust safeguards to protect individuals’ privacy and prevent the inadvertent propagation of sensitive information.
@misc{MMM2024,address={Torino, Italia},author={Muscato, Benedetta and Mala, Chandana Sree and Manerba, Marta Marchiori and Gezici, Gizem and Giannotti, Fosca},line={3},month=dec,pages={49--55},publisher={ELRA and ICCL},title={An Overview of Recent Approaches to Enable Diversity in Large Language Models through Aligning with Human Perspectives},year={2024}}
2023
Benchmarking and survey of explanation methods for black box models
Francesco Bodria, Fosca Giannotti, Riccardo Guidotti, Francesca Naretto, Dino Pedreschi, and 1 more author
The rise of sophisticated black-box machine learning models in Artificial Intelligence systems has prompted the need for explanation methods that reveal how these models work in an understandable way to users and decision makers. Unsurprisingly, the state-of-the-art exhibits currently a plethora of explainers providing many different types of explanations. With the aim of providing a compass for researchers and practitioners, this paper proposes a categorization of explanation methods from the perspective of the type of explanation they return, also considering the different input data formats. The paper accounts for the most representative explainers to date, also discussing similarities and discrepancies of returned explanations through their visual appearance. A companion website to the paper is provided as a continuous update to new explainers as they appear. Moreover, a subset of the most robust and widely adopted explainers, are benchmarked with respect to a repertoire of quantitative metrics.
@article{BGG2023,address={Netherlands},author={Bodria, Francesco and Giannotti, Fosca and Guidotti, Riccardo and Naretto, Francesca and Pedreschi, Dino and Rinzivillo, Salvatore},doi={10.1007/s10618-023-00933-9},issn={1573-756X},journal={Data Mining and Knowledge Discovery},line={1,3},month=jun,number={5},open_access={Gold},pages={1719–1778},publisher={Springer Science and Business Media LLC},title={Benchmarking and survey of explanation methods for black box models},visible_on_website={YES},volume={37},year={2023}}
Improving trust and confidence in medical skin lesion diagnosis through explainable deep learning
Carlo Metta, Andrea Beretta, Riccardo Guidotti, Yuan Yin, Patrick Gallinari, and 2 more authors
International Journal of Data Science and Analytics, Jun 2023
A key issue in critical contexts such as medical diagnosis is the interpretability of the deep learning models adopted in decision-making systems. Research in eXplainable Artificial Intelligence (XAI) is trying to solve this issue. However, often XAI approaches are only tested on generalist classifier and do not represent realistic problems such as those of medical diagnosis. In this paper, we aim at improving the trust and confidence of users towards automatic AI decision systems in the field of medical skin lesion diagnosis by customizing an existing XAI approach for explaining an AI model able to recognize different types of skin lesions. The explanation is generated through the use of synthetic exemplar and counter-exemplar images of skin lesions and our contribution offers the practitioner a way to highlight the crucial traits responsible for the classification decision. A validation survey with domain experts, beginners, and unskilled people shows that the use of explanations improves trust and confidence in the automatic decision system. Also, an analysis of the latent space adopted by the explainer unveils that some of the most frequent skin lesion classes are distinctly separated. This phenomenon may stem from the intrinsic characteristics of each class and may help resolve common misclassifications made by human experts.
@article{MBG2023,address={Berlin/Heidelberg, Germany},author={Metta, Carlo and Beretta, Andrea and Guidotti, Riccardo and Yin, Yuan and Gallinari, Patrick and Rinzivillo, Salvatore and Giannotti, Fosca},doi={10.1007/s41060-023-00401-z},issn={2364-4168},journal={International Journal of Data Science and Analytics},line={1,3},month=jun,number={1},open_access={Gold},pages={183–195},publisher={Springer Science and Business Media LLC},title={Improving trust and confidence in medical skin lesion diagnosis through explainable deep learning},visible_on_website={YES},volume={20},year={2023}}
Reason to Explain: Interactive Contrastive Explanations (REASONX)
Laura State, Salvatore Ruggieri, and Franco Turini
Many high-performing machine learning models are not interpretable. As they are increasingly used in decision scenarios that can critically affect individuals, it is necessary to develop tools to better understand their outputs. Popular explanation methods include contrastive explanations. However, they suffer several shortcomings, among others an insufficient incorporation of background knowledge, and a lack of interactivity. While (dialogue-like) interactivity is important to better communicate an explanation, background knowledge has the potential to significantly improve their quality, e.g., by adapting the explanation to the needs of the end-user. To close this gap, we present REASONX, an explanation tool based on Constraint Logic Programming (CLP). REASONX provides interactive contrastive explanations that can be augmented by background knowledge, and allows to operate under a setting of under-specified information, leading to increased flexibility in the provided explanations. REASONX computes factual and contrastive decision rules, as well as closest contrastive examples. It provides explanations for decision trees, which can be the ML models under analysis, or global/local surrogate models of any ML model. While the core part of REASONX is built on CLP, we also provide a program layer that allows to compute the explanations via Python, making the tool accessible to a wider audience. We illustrate the capability of REASONX on a synthetic data set, and on a well-developed example in the credit domain. In both cases, we can show how REASONX can be flexibly used and tailored to the needs of the user.
@inbook{SRT2023,author={State, Laura and Ruggieri, Salvatore and Turini, Franco},booktitle={Explainable Artificial Intelligence},doi={10.1007/978-3-031-44064-9_22},isbn={9783031440649},issn={1865-0937},line={1,3},open_access={NO},pages={421–437},publisher={Springer Nature Switzerland},title={Reason to Explain: Interactive Contrastive Explanations (REASONX)},visible_on_website={YES},year={2023}}
EXPHLOT: EXplainable Privacy Assessment for Human LOcation Trajectories
Francesca Naretto, Roberto Pellungrini, Salvatore Rinzivillo, and Daniele Fadda
Human mobility data play a crucial role in understanding mobility patterns and developing analytical services across domains such as urban planning, transportation, and public health. However, due to the sensitive nature of this data, identifying privacy risks is essential before deciding to release it publicly. Recent work has proposed using machine learning models for predicting privacy risk on raw mobility trajectories and using SHAP for risk explanation. However, applying SHAP to mobility data results in explanations of limited use both for privacy experts and end-users. In this work, we present EXPHLOT, a novel version of the Expert privacy risk prediction and explanation framework specifically tailored for human mobility data. We leverage state-of-the-art algorithms in time series classification to improve risk prediction while reducing computation time. We also devise an entropy-based mask to efficiently compute SHAP values and develop a module for interactive analysis and visualization of SHAP values over a map, empowering users with an intuitive understanding of privacy risk.
@inbook{NPR2023,author={Naretto, Francesca and Pellungrini, Roberto and Rinzivillo, Salvatore and Fadda, Daniele},booktitle={Discovery Science},doi={10.1007/978-3-031-45275-8_22},isbn={9783031452758},issn={1611-3349},line={1,3,5},open_access={Gold},pages={325–340},publisher={Springer Nature Switzerland},title={EXPHLOT: EXplainable Privacy Assessment for Human LOcation Trajectories},visible_on_website={YES},year={2023}}
Demo: an Interactive Visualization Combining Rule-Based and Feature Importance Explanations
Eleonora Cappuccio, Daniele Fadda, Rosa Lanzilotti, and Salvatore Rinzivillo
In Proceedings of the 15th Biannual Conference of the Italian SIGCHI Chapter , Sep 2023
The Human-Computer Interaction (HCI) community has long stressed the need for a more user-centered approach to Explainable Artificial Intelligence (XAI), a research area that aims at defining algorithms and tools to illustrate the predictions of the so-called black-box models. This approach can benefit from the fields of user-interface, user experience, and visual analytics. In this demo, we propose a visual-based tool, "F.I.P.E.R.", that shows interactive explanations combining rules and feature importance.
@inproceedings{CFR2023,author={Cappuccio, Eleonora and Fadda, Daniele and Lanzilotti, Rosa and Rinzivillo, Salvatore},booktitle={Proceedings of the 15th Biannual Conference of the Italian SIGCHI Chapter},collection={CHItaly 2023},doi={10.1145/3605390.3610811},line={1,2,3},month=sep,open_access={NO},pages={1–4},publisher={ACM},series={CHItaly 2023},title={Demo: an Interactive Visualization Combining Rule-Based and Feature Importance Explanations},visible_on_website={YES},year={2023}}
Co-design of Human-centered, Explainable AI for Clinical Decision Support
Cecilia Panigutti, Andrea Beretta, Daniele Fadda, Fosca Giannotti, Dino Pedreschi, and 2 more authors
ACM Transactions on Interactive Intelligent Systems, Dec 2023
eXplainable AI (XAI) involves two intertwined but separate challenges: the development of techniques to extract explanations from black-box AI models and the way such explanations are presented to users, i.e., the explanation user interface. Despite its importance, the second aspect has received limited attention so far in the literature. Effective AI explanation interfaces are fundamental for allowing human decision-makers to take advantage and oversee high-risk AI systems effectively. Following an iterative design approach, we present the first cycle of prototyping-testing-redesigning of an explainable AI technique and its explanation user interface for clinical Decision Support Systems (DSS). We first present an XAI technique that meets the technical requirements of the healthcare domain: sequential, ontology-linked patient data, and multi-label classification tasks. We demonstrate its applicability to explain a clinical DSS, and we design a first prototype of an explanation user interface. Next, we test such a prototype with healthcare providers and collect their feedback with a two-fold outcome: First, we obtain evidence that explanations increase users’ trust in the XAI system, and second, we obtain useful insights on the perceived deficiencies of their interaction with the system, so we can re-design a better, more human-centered explanation interface.
@article{PBF2023,author={Panigutti, Cecilia and Beretta, Andrea and Fadda, Daniele and Giannotti, Fosca and Pedreschi, Dino and Perotti, Alan and Rinzivillo, Salvatore},doi={10.1145/3587271},issn={2160-6463},journal={ACM Transactions on Interactive Intelligent Systems},line={1,3},month=dec,number={4},open_access={Gold},pages={1–35},publisher={Association for Computing Machinery (ACM)},title={Co-design of Human-centered, Explainable AI for Clinical Decision Support},visible_on_website={YES},volume={13},year={2023}}
2022
Explaining Black Box with Visual Exploration of Latent Space
Francesco Bodria, Salvatore Rinzivillo, Daniele Fadda, Riccardo Guidotti, Fosca Giannotti, and 2 more authors
Autoencoders are a powerful yet opaque feature reduction technique, on top of which we propose a novel way for the joint visual exploration of both latent and real space. By interactively exploiting the mapping between latent and real features, it is possible to unveil the meaning of latent features while providing deeper insight into the original variables. To achieve this goal, we exploit and re-adapt existing approaches from eXplainable Artificial Intelligence (XAI) to understand the relationships between the input and latent features. The uncovered relationships between input features and latent ones allow the user to understand the data structure concerning external variables such as the predictions of a classification model. We developed an interactive framework that visually explores the latent space and allows the user to understand the relationships of the input features with model prediction.
@misc{BRF2022,author={Bodria, Francesco and Rinzivillo, Salvatore and Fadda, Daniele and Guidotti, Riccardo and Giannotti, Fosca and Pedreschi, Dino},doi={10.2312/evs.20221098},line={1,3},month=dec,title={Explaining Black Box with Visual Exploration of Latent Space},year={2022}}
User-driven counterfactual generator: a human centered exploration
In this paper, we critically examine the limitations of the techno-solutionist approach to explanations in the context of counterfactual generation, reaffirming interactivity as a core value in the explanation interface between the model and the user.
@misc{BCM2022,author={Beretta, I. and Cappuccio, E. and Marchiori Manerba, M.},line={1,3},month=dec,title={User-driven counterfactual generator: a human centered exploration},year={2022}}
2020
Doctor XAI: an ontology-based approach to black-box sequential data classification explanations
Cecilia Panigutti, Alan Perotti, and Dino Pedreschi
In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency , Jan 2020
Several recent advancements in Machine Learning involve blackbox models: algorithms that do not provide human-understandable explanations in support of their decisions. This limitation hampers the fairness, accountability and transparency of these models; the field of eXplainable Artificial Intelligence (XAI) tries to solve this problem providing human-understandable explanations for black-box models. However, healthcare datasets (and the related learning tasks) often present peculiar features, such as sequential data, multi-label predictions, and links to structured background knowledge. In this paper, we introduce Doctor XAI, a model-agnostic explainability technique able to deal with multi-labeled, sequential, ontology-linked data. We focus on explaining Doctor AI, a multilabel classifier which takes as input the clinical history of a patient in order to predict the next visit. Furthermore, we show how exploiting the temporal dimension in the data and the domain knowledge encoded in the medical ontology improves the quality of the mined explanations.
@inproceedings{PPP2020,author={Panigutti, Cecilia and Perotti, Alan and Pedreschi, Dino},booktitle={Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency},collection={FAT* ’20},doi={10.1145/3351095.3372855},line={1,3,4},month=jan,open_access={NO},pages={629–639},publisher={ACM},series={FAT* ’20},title={Doctor XAI: an ontology-based approach to black-box sequential data classification explanations},visible_on_website={YES},year={2020}}
2019
The AI black box explanation problem
Riccardo Guidotti, Anna Monreale, and Dino Pedreschi
The use of machine learning in decision-making has triggered an intense debate about “fair algorithms”. Given that fairness intuitions differ and can lead to conflicting technical requirements, there is a pressing need to integrate ethical thinking into the research and design of machine learning. We outline a framework showing how this can be done.
@misc{GMP2019,author={Guidotti, Riccardo and Monreale, Anna and Pedreschi, Dino},line={1,2,3},month=dec,publisher={ERCIM – the European Research Consortium for Informatics and Mathematics},title={The AI black box explanation problem},year={2019}}
2018
A Survey of Methods for Explaining Black Box Models
Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Fosca Giannotti, and 1 more author
In recent years, many accurate decision support systems have been constructed as black boxes, that is as systems that hide their internal logic to the user. This lack of explanation constitutes both a practical and an ethical issue. The literature reports many approaches aimed at overcoming this crucial weakness, sometimes at the cost of sacrificing accuracy for interpretability. The applications in which black box decision systems can be used are various, and each approach is typically developed to provide a solution for a specific problem and, as a consequence, it explicitly or implicitly delineates its own definition of interpretability and explanation. The aim of this article is to provide a classification of the main problems addressed in the literature with respect to the notion of explanation and the type of black box system. Given a problem definition, a black box type, and a desired explanation, this survey should help the researcher to find the proposals more useful for his own work. The proposed classification of approaches to open black box models should also be useful for putting the many research open questions in perspective.
@article{GMR2018,author={Guidotti, Riccardo and Monreale, Anna and Ruggieri, Salvatore and Turini, Franco and Giannotti, Fosca and Pedreschi, Dino},doi={10.1145/3236009},issn={1557-7341},journal={ACM Computing Surveys},line={1,3},month=aug,number={5},pages={1–42},publisher={Association for Computing Machinery (ACM)},title={A Survey of Methods for Explaining Black Box Models},visible_on_website={YES},volume={51},year={2018}}