Resources
All the publications and theses of the XAI Project
Publications
1.
[GMR2018] Guidotti Riccardo, Monreale Anna, Ruggieri Salvatore, Turini Franco, Giannotti Fosca, Pedreschi Dino (2018) - ACM Computing Surveys. In ACM Computing Surveys (CSUR), 51(5), 1-42.
Abstract
In recent years, many accurate decision support systems have been constructed as black boxes, that is as systems that hide their internal logic to the user. This lack of explanation constitutes both a practical and an ethical issue. The literature reports many approaches aimed at overcoming this crucial weakness, sometimes at the cost of sacrificing accuracy for interpretability. The applications in which black box decision systems can be used are various, and each approach is typically developed to provide a solution for a specific problem and, as a consequence, it explicitly or implicitly delineates its own definition of interpretability and explanation. The aim of this article is to provide a classification of the main problems addressed in the literature with respect to the notion of explanation and the type of black box system. Given a problem definition, a black box type, and a desired explanation, this survey should help the researcher to find the proposals more useful for his own work. The proposed classification of approaches to open black box models should also be useful for putting the many research open questions in perspective.
BibTex
@article{Guidotti_2018, doi = {10.1145/3236009}, url = {https://doi.org/10.1145%2F3236009}, year = 2018, month = {aug}, publisher = {Association for Computing Machinery ({ACM})}, volume = {51}, number = {5}, pages = {1--42}, author = {Riccardo Guidotti and Anna Monreale and Salvatore Ruggieri and Franco Turini and Fosca Giannotti and Dino Pedreschi}, title = {A Survey of Methods for Explaining Black Box Models}, journal = {{ACM} Computing Surveys} }
2.
[GMG2019]
Guidotti Riccardo, Monreale Anna, Giannotti Fosca, Pedreschi Dino, Ruggieri Salvatore, Turini Franco (2019) - IEEE Intelligent Systems. In IEEE Intelligent Systems
Abstract
The rise of sophisticated machine learning models has brought accurate but obscure decision systems, which hide their logic, thus undermining transparency, trust, and the adoption of artificial intelligence (AI) in socially sensitive and safety-critical contexts. We introduce a local rule-based explanation method, providing faithful explanations of the decision made by a black box classifier on a specific instance. The proposed method first learns an interpretable, local classifier on a synthetic neighborhood of the instance under investigation, generated by a genetic algorithm. Then, it derives from the interpretable classifier an explanation consisting of a decision rule, explaining the factual reasons of the decision, and a set of counterfactuals, suggesting the changes in the instance features that would lead to a different outcome. Experimental results show that the proposed method outperforms existing approaches in terms of the quality of the explanations and of the accuracy in mimicking the black box.
BibTex
@article{Guidotti_2019, doi = {10.1109/mis.2019.2957223}, url = {https://doi.org/10.1109%2Fmis.2019.2957223}, year = 2019, month = {nov}, publisher = {Institute of Electrical and Electronics Engineers ({IEEE})}, volume = {34}, number = {6}, pages = {14--23}, author = {Riccardo Guidotti and Anna Monreale and Fosca Giannotti and Dino Pedreschi and Salvatore Ruggieri and Franco Turini}, title = {Factual and Counterfactual Explanations for Black Box Decision Making}, journal = {{IEEE} Intelligent Systems} }
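The local explanation scheme summarized above can be illustrated with a short sketch: build a synthetic neighborhood around the instance, label it with the black box, fit a shallow decision tree as the local interpretable classifier, and read a factual rule and a nearby counterfactual off it. The code below is illustrative only and assumes a scikit-learn setup with plain Gaussian perturbation in place of the genetic neighborhood generation used in the paper; the dataset and models are stand-ins, not the authors' implementation.

```python
# Illustrative LORE-style local explanation (not the authors' code): Gaussian
# perturbation replaces the genetic neighborhood generation described in the paper.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

x = X[0]                                                  # instance to explain
rng = np.random.default_rng(0)
neighborhood = x + rng.normal(scale=X.std(axis=0) * 0.5, size=(1000, X.shape[1]))
bb_labels = black_box.predict(neighborhood)               # black-box labels

surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(neighborhood, bb_labels)                    # local interpretable model

# Factual rule: the root-to-leaf path followed by x in the surrogate tree.
tree, node, premises = surrogate.tree_, 0, []
while tree.children_left[node] != -1:
    f, thr = tree.feature[node], tree.threshold[node]
    if x[f] <= thr:
        premises.append(f"feature[{f}] <= {thr:.2f}")
        node = tree.children_left[node]
    else:
        premises.append(f"feature[{f}] > {thr:.2f}")
        node = tree.children_right[node]
print("factual rule:", " AND ".join(premises), "->", surrogate.predict([x])[0])

# Counterfactual: closest synthetic neighbor that the surrogate assigns elsewhere.
other = neighborhood[surrogate.predict(neighborhood) != surrogate.predict([x])[0]]
if len(other):
    cf = other[np.linalg.norm(other - x, axis=1).argmin()]
    print("counterfactual changes mostly features:",
          np.argsort(-np.abs(cf - x))[:3].tolist())
```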
3.
[SGM2021]
Setzu Mattia, Guidotti Riccardo, Monreale Anna, Turini Franco, Pedreschi Dino, Giannotti Fosca (2021) - Artificial Intelligence. In Artificial Intelligence
Abstract
Artificial Intelligence (AI) has come to prominence as one of the major components of our society, with applications in most aspects of our lives. In this field, complex and highly nonlinear machine learning models such as ensemble models, deep neural networks, and Support Vector Machines have consistently shown remarkable accuracy in solving complex tasks. Although accurate, AI models often are “black boxes” which we are not able to understand. Relying on these models has a multifaceted impact and raises significant concerns about their transparency. Applications in sensitive and critical domains are a strong motivational factor in trying to understand the behavior of black boxes. We propose to address this issue by providing an interpretable layer on top of black box models by aggregating “local” explanations. We present GLocalX, a “local-first” model agnostic explanation method. Starting from local explanations expressed in form of local decision rules, GLocalX iteratively generalizes them into global explanations by hierarchically aggregating them. Our goal is to learn accurate yet simple interpretable models to emulate the given black box, and, if possible, replace it entirely. We validate GLocalX in a set of experiments in standard and constrained settings with limited or no access to either data or local explanations. Experiments show that GLocalX is able to accurately emulate several models with simple and small models, reaching state-of-the-art performance against natively global solutions. Our findings show how it is often possible to achieve a high level of both accuracy and comprehensibility of classification models, even in complex domains with high-dimensional data, without necessarily trading one property for the other. This is a key requirement for a trustworthy AI, necessary for adoption in high-stakes decision making applications.
BibTex
@article{Setzu_2021, doi = {10.1016/j.artint.2021.103457}, url = {https://doi.org/10.1016%2Fj.artint.2021.103457}, year = 2021, month = {may}, publisher = {Elsevier {BV}}, volume = {294}, pages = {103457}, author = {Mattia Setzu and Riccardo Guidotti and Anna Monreale and Franco Turini and Dino Pedreschi and Fosca Giannotti}, title = {{GLocalX} - From Local to Global Explanations of Black Box {AI} Models}, journal = {Artificial Intelligence} }
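To give a flavor of the local-to-global step, the hedged sketch below turns a pool of local rules (here, trivially read off depth-1 trees fit on small neighborhoods) into a compact global rule set by greedily keeping only the rules that improve fidelity to the black box. This greedy selection is a deliberate simplification standing in for GLocalX's hierarchical merging, and all data and models are illustrative assumptions.

```python
# Hedged local-to-global sketch: depth-1 local trees become single-condition
# rules, and a greedy fidelity filter plays the role of GLocalX's hierarchical
# merging (a deliberate simplification).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
bb = RandomForestClassifier(random_state=0).fit(X, y).predict(X)   # black-box labels
rng = np.random.default_rng(0)

def local_rules(center, k=150):
    """Two single-condition rules read off a depth-1 tree fit around `center`."""
    idx = np.argsort(np.linalg.norm(X - center, axis=1))[:k]
    t = DecisionTreeClassifier(max_depth=1, random_state=0).fit(X[idx], bb[idx]).tree_
    if t.node_count == 1:                       # pure neighborhood, no rule
        return []
    f, thr = t.feature[0], t.threshold[0]
    return [(f, thr, "<=", int(t.value[1].argmax())),
            (f, thr, ">", int(t.value[2].argmax()))]

local = [r for c in X[rng.choice(len(X), 30, replace=False)] for r in local_rules(c)]

def rule_predict(rules, data):
    votes = np.zeros((len(data), 2))
    for f, thr, op, label in rules:
        fires = data[:, f] <= thr if op == "<=" else data[:, f] > thr
        votes[fires, label] += 1
    return votes.argmax(axis=1)

global_rules, best = [], 0.0
for rule in local:                              # keep a rule only if fidelity improves
    fid = (rule_predict(global_rules + [rule], X) == bb).mean()
    if fid > best:
        global_rules.append(rule)
        best = fid
print(len(global_rules), "global rules, fidelity to the black box:", round(best, 3))
```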
4.
[GMR2018a] Guidotti Riccardo, Monreale Anna, Ruggieri Salvatore, Pedreschi Dino, Turini Franco, Giannotti Fosca (2018) - arXiv preprint
Abstract
The recent years have witnessed the rise of accurate but obscure decision systems which hide the logic of their internal decision processes to the users. The lack of explanations for the decisions of black box systems is a key ethical issue, and a limitation to the adoption of machine learning components in socially sensitive and safety-critical contexts. Therefore, we need explanations that reveal the reasons why a predictor takes a certain decision. In this paper we focus on the problem of black box outcome explanation, i.e., explaining the reasons of the decision taken on a specific instance. We propose LORE, an agnostic method able to provide interpretable and faithful explanations. LORE first learns a local interpretable predictor on a synthetic neighborhood generated by a genetic algorithm. Then it derives from the logic of the local interpretable predictor a meaningful explanation consisting of: a decision rule, which explains the reasons of the decision; and a set of counterfactual rules, suggesting the changes in the instance's features that lead to a different outcome. Wide experiments show that LORE outperforms existing methods and baselines both in the quality of explanations and in the accuracy in mimicking the black box.
BibTex
BibTex not found
5.
[NMG2022]Francesca Naretto, Anna Monreale, Fosca Giannotti (2022) - Proceedings of the First International Conference on Hybrid Human-Artificial Intelligence. In Frontiers in Artificial Intelligence and Applications
Abstract
ABSTRACT NOT FOUND
BibTex
BibTex not found
Research Line 5
6.
[NPR2022] Ana Rita Nogueira, Andrea Pugnana, Salvatore Ruggieri, Dino Pedreschi, João Gama (2022) - WIREs Data Mining and Knowledge Discovery. In WIREs Data Mining and Knowledge Discovery
Abstract
Causality is a complex concept, which roots its developments across several fields, such as statistics, economics, epidemiology, computer science, and philosophy. In recent years, the study of causal relationships has become a crucial part of the Artificial Intelligence community, as causality can be a key tool for overcoming some limitations of correlation-based Machine Learning systems. Causality research can generally be divided into two main branches, that is, causal discovery and causal inference. The former focuses on obtaining causal knowledge directly from observational data. The latter aims to estimate the impact deriving from a change of a certain variable over an outcome of interest. This article aims at covering several methodologies that have been developed for both tasks. This survey does not only focus on theoretical aspects, but also provides a practical toolkit for interested researchers and practitioners, including software, datasets, and running examples.
BibTex
@article{Nogueira_2022, doi = {10.1002/widm.1449}, url = {https://doi.org/10.1002%2Fwidm.1449}, year = 2022, month = {jan}, publisher = {Wiley}, volume = {12}, number = {2}, author = {Ana Rita Nogueira and Andrea Pugnana and Salvatore Ruggieri and Dino Pedreschi and Jo{\~{a}}o Gama}, title = {Methods and tools for causal discovery and causal inference}, journal = {{WIREs} Data Mining and Knowledge Discovery} }
8.
[TSS2022] Andreas Theissler, Francesco Spinnato, Udo Schlegel, Riccardo Guidotti (2022) - IEEE Access. In IEEE Access (Volume 10)
Abstract
Time series data is increasingly used in a wide range of fields, and it is often relied on in crucial applications and high-stakes decision-making. For instance, sensors generate time series data to recognize different types of anomalies through automatic decision-making systems. Typically, these systems are realized with machine learning models that achieve top-tier performance on time series classification tasks. Unfortunately, the logic behind their prediction is opaque and hard to understand from a human standpoint. Recently, we observed a consistent increase in the development of explanation methods for time series classification justifying the need to structure and review the field. In this work, we (a) present the first extensive literature review on Explainable AI (XAI) for time series classification, (b) categorize the research field through a taxonomy subdividing the methods into time points-based, subsequences-based and instance-based, and (c) identify open research directions regarding the type of explanations and the evaluation of explanations and interpretability.
BibTex
@article{Theissler_2022, doi = {10.1109/access.2022.3207765}, url = {https://doi.org/10.1109%2Faccess.2022.3207765}, year = 2022, publisher = {Institute of Electrical and Electronics Engineers ({IEEE})}, volume = {10}, pages = {100700--100724}, author = {Andreas Theissler and Francesco Spinnato and Udo Schlegel and Riccardo Guidotti}, title = {Explainable {AI} for Time Series Classification: A Review, Taxonomy and Research Directions}, journal = {{IEEE} Access} }
9.
[G2022]Riccardo Guidotti (2022) - Data Mining and Knowledge Discovery. In Data Mining and Knowledge Discovery
Abstract
Interpretable machine learning aims at unveiling the reasons behind predictions returned by uninterpretable classifiers. One of the most valuable types of explanation consists of counterfactuals. A counterfactual explanation reveals what should have been different in an instance to observe a diverse outcome. For instance, a bank customer asks for a loan that is rejected. The counterfactual explanation consists of what should have been different for the customer in order to have the loan accepted. Recently, there has been an explosion of proposals for counterfactual explainers. The aim of this work is to survey the most recent explainers returning counterfactual explanations. We categorize explainers based on the approach adopted to return the counterfactuals, and we label them according to characteristics of the method and properties of the counterfactuals returned. In addition, we visually compare the explanations, and we report quantitative benchmarking assessing minimality, actionability, stability, diversity, discriminative power, and running time. The results make evident that the current state of the art does not provide a counterfactual explainer able to guarantee all these properties simultaneously.
BibTex
@article{Guidotti_2022, doi = {10.1007/s10618-022-00831-6}, url = {https://doi.org/10.1007%2Fs10618-022-00831-6}, year = 2022, month = {apr}, publisher = {Springer Science and Business Media {LLC}}, author = {Riccardo Guidotti}, title = {Counterfactual explanations and how to find them: literature review and benchmarking}, journal = {Data Mining and Knowledge Discovery} }
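The loan example above can be made concrete with a tiny brute-force search: scan single-feature changes and keep the smallest one that flips the classifier's prediction. The sketch implements none of the surveyed explainers; it only illustrates what kind of object a counterfactual explanation is, on an assumed scikit-learn model.

```python
# Toy counterfactual search: the smallest single-feature change that flips the
# prediction. Illustrative only; the surveyed explainers additionally optimize
# actionability, plausibility, diversity, and so on.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
clf = LogisticRegression(max_iter=5000).fit(X, y)

x = X[0].copy()
original = clf.predict([x])[0]

best = None                                    # (normalized distance, feature, value)
for f in range(X.shape[1]):
    for value in np.linspace(X[:, f].min(), X[:, f].max(), 50):
        candidate = x.copy()
        candidate[f] = value
        if clf.predict([candidate])[0] != original:
            dist = abs(value - x[f]) / (X[:, f].std() + 1e-9)
            if best is None or dist < best[0]:
                best = (dist, f, value)

if best is not None:
    print(f"class {original} flips by moving feature {best[1]} "
          f"from {x[best[1]]:.2f} to {best[2]:.2f}")
```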
10.
[MG2022]Marta Marchiori Manerba, Guidotti Riccardo (2022) - Conference on AI, Ethics, and Society (AIES 2022). In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (AIES'22)
Abstract
During each stage of a dataset creation and development process, harmful biases can be accidentally introduced, leading to models that perpetuate marginalization and discrimination of minorities, as the role of the data used during the training is critical. We propose an evaluation framework that investigates the impact on classification and explainability of bias mitigation preprocessing techniques used to assess data imbalances concerning minorities' representativeness and mitigate the skewed distributions discovered. Our evaluation focuses on assessing fairness, explainability and performance metrics. We analyze the behavior of local model-agnostic explainers on the original and mitigated datasets to examine whether the proxy models learned by the explainability techniques to mimic the black-boxes disproportionately rely on sensitive attributes, demonstrating biases rooted in the explainers. We conduct several experiments on known biased datasets to demonstrate our proposal's novelty and effectiveness for evaluation and bias detection purposes.
BibTex
@inproceedings{Marchiori_Manerba_2022, doi = {10.1145/3514094.3534170}, url = {https://doi.org/10.1145%2F3514094.3534170}, year = 2022, month = {jul}, publisher = {{ACM}}, author = {Marta Marchiori Manerba and Riccardo Guidotti}, title = {Investigating Debiasing Effects on Classification and Explainability}, booktitle = {Proceedings of the 2022 {AAAI}/{ACM} Conference on {AI}, Ethics, and Society} }
Research Line 1▪5
11.
[BRF2022]
Bodria Francesco, Rinzivillo Salvatore, Fadda Daniele, Guidotti Riccardo, Giannotti Fosca, Pedreschi Dino (2022) - EuroVis 2022. In Proceedings of the EuroVis 2022 Conference
Abstract
Autoencoders are a powerful yet opaque feature reduction technique, on top of which we propose a novel way for the joint visual exploration of both latent and real space. By interactively exploiting the mapping between latent and real features, it is possible to unveil the meaning of latent features while providing deeper insight into the original variables. To achieve this goal, we exploit and re-adapt existing approaches from eXplainable Artificial Intelligence (XAI) to understand the relationships between the input and latent features. The uncovered relationships between input features and latent ones allow the user to understand the data structure concerning external variables such as the predictions of a classification model. We developed an interactive framework that visually explores the latent space and allows the user to understand the relationships of the input features with model prediction.
BibTex
BibTex not found
12.
[PBF2022] Panigutti Cecilia, Beretta Andrea, Fadda Daniele, Giannotti Fosca, Pedreschi Dino, Perotti Alan, Rinzivillo Salvatore (2022). In ACM Transactions on Interactive Intelligent Systems
Abstract
eXplainable AI (XAI) involves two intertwined but separate challenges: the development of techniques to extract explanations from black-box AI models, and the way such explanations are presented to users, i.e., the explanation user interface. Despite its importance, the second aspect has received limited attention so far in the literature. Effective AI explanation interfaces are fundamental for allowing human decision-makers to take advantage and oversee high-risk AI systems effectively. Following an iterative design approach, we present the first cycle of prototyping-testing-redesigning of an explainable AI technique, and its explanation user interface for clinical Decision Support Systems (DSS). We first present an XAI technique that meets the technical requirements of the healthcare domain: sequential, ontology-linked patient data, and multi-label classification tasks. We demonstrate its applicability to explain a clinical DSS, and we design a first prototype of an explanation user interface. Next, we test such a prototype with healthcare providers and collect their feedback, with a two-fold outcome: first, we obtain evidence that explanations increase users' trust in the XAI system, and second, we obtain useful insights on the perceived deficiencies of their interaction with the system, so that we can re-design a better, more human-centered explanation interface.
BibTex
BibTex not found
Research Line 1▪3▪4
13.
[PBP2022] Panigutti Cecilia, Beretta Andrea, Pedreschi Dino, Giannotti Fosca (2022) - 2022 CHI Conference on Human Factors in Computing Systems. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems
Abstract
The field of eXplainable Artificial Intelligence (XAI) focuses on providing explanations for AI systems' decisions. XAI applications to AI-based Clinical Decision Support Systems (DSS) should increase trust in the DSS by allowing clinicians to investigate the reasons behind its suggestions. In this paper, we present the results of a user study on the impact of advice from a clinical DSS on healthcare providers' judgment in two different cases: the case where the clinical DSS explains its suggestion and the case it does not. We examined the weight of advice, the behavioral intention to use the system, and the perceptions with quantitative and qualitative measures. Our results indicate a more significant impact of advice when an explanation for the DSS decision is provided. Additionally, through the open-ended questions, we provide some insights on how to improve the explanations in the diagnosis forecasts for healthcare assistants, nurses, and doctors.
BibTex
@inproceedings{Panigutti_2022, doi = {10.1145/3491102.3502104}, url = {https://doi.org/10.1145%2F3491102.3502104}, year = 2022, month = {apr}, publisher = {{ACM}}, author = {Cecilia Panigutti and Andrea Beretta and Fosca Giannotti and Dino Pedreschi}, title = {Understanding the impact of explanations on advice-taking: a user study for {AI}-based clinical Decision Support Systems}, booktitle = {{CHI} Conference on Human Factors in Computing Systems} }
Research Line 4
14.
[SMM2022] Setzu Mattia, Monreale Anna, Minervini Pasquale (2021) - IEEE Third International Conference on Cognitive Machine Intelligence (CogMI) 2021
Abstract
ABSTRACT NOT FOUND
BibTex
@inproceedings{Setzu_2021, doi = {10.1109/cogmi52975.2021.00015}, url = {https://doi.org/10.1109%2Fcogmi52975.2021.00015}, year = 2021, month = {dec}, publisher = {{IEEE}}, author = {Mattia Setzu and Anna Monreale and Pasquale Minervini}, title = {{TRIPLEx}: Triple Extraction for Explanation}, booktitle = {2021 {IEEE} Third International Conference on Cognitive Machine Intelligence ({CogMI})} }
15.
[VMG2022]
Voukelatou Vasiliki, Miliou Ioanna, Giannotti Fosca, Pappalardo Luca (2022) - EPJ Data Science. In EPJ Data Science
Abstract
Peace is a principal dimension of well-being and is the way out of inequity and violence. Thus, its measurement has drawn the attention of researchers, policymakers, and peacekeepers. During the last years, novel digital data streams have drastically changed the research in this field. The current study exploits information extracted from a new digital database called Global Data on Events, Location, and Tone (GDELT) to capture peace through the Global Peace Index (GPI). Applying predictive machine learning models, we demonstrate that news media attention from GDELT can be used as a proxy for measuring GPI at a monthly level. Additionally, we use explainable AI techniques to obtain the most important variables that drive the predictions. This analysis highlights each country’s profile and provides explanations for the predictions, and particularly for the errors and the events that drive these errors. We believe that digital data exploited by researchers, policymakers, and peacekeepers, with data science tools as powerful as machine learning, could contribute to maximizing the societal benefits and minimizing the risks to peace.
BibTex
@article{Voukelatou_2022, doi = {10.1140/epjds/s13688-022-00315-z}, url = {https://doi.org/10.1140%2Fepjds%2Fs13688-022-00315-z}, year = 2022, month = {jan}, publisher = {Springer Science and Business Media {LLC}}, volume = {11}, number = {1}, author = {Vasiliki Voukelatou and Ioanna Miliou and Fosca Giannotti and Luca Pappalardo}, title = {Understanding peace through the world news}, journal = {{EPJ} Data Science} }
16.
[CDF2021] Chatila Raja, Dignum Virginia, Fisher Michael, Giannotti Fosca, Morik Katharina, Russell Stuart, Yeung Karen (2021) - Reflections on Artificial Intelligence for Humanity. In Lecture Notes in Computer Science
Abstract
Modern AI systems have become of widespread use in almost all sectors with a strong impact on our society. However, the very methods on which they rely, based on Machine Learning techniques for processing data to predict outcomes and to make decisions, are opaque, prone to bias and may produce wrong answers. Objective functions optimized in learning systems are not guaranteed to align with the values that motivated their definition. Properties such as transparency, verifiability, explainability, security, technical robustness and safety, are key to build operational governance frameworks, so as to make AI systems justifiably trustworthy and to align their development and use with human rights and values.
BibTex
@incollection{Chatila_2021, doi = {10.1007/978-3-030-69128-8_2}, url = {https://doi.org/10.1007%2F978-3-030-69128-8_2}, year = 2021, publisher = {Springer International Publishing}, pages = {13--39}, author = {Raja Chatila and Virginia Dignum and Michael Fisher and Fosca Giannotti and Katharina Morik and Stuart Russell and Karen Yeung}, title = {Trustworthy {AI}}, booktitle = {Reflections on Artificial Intelligence for Humanity} }
17.
[PMC2022]
Panigutti Cecilia, Monreale Anna, Comandè Giovanni, Pedreschi Dino (2022) - Deep Learning in Biology and Medicine. In Deep Learning in Biology and Medicine
Abstract
Biology, medicine and biochemistry have become data-centric fields for which Deep Learning methods are delivering groundbreaking results. Addressing high impact challenges, Deep Learning in Biology and Medicine provides an accessible and organic collection of Deep Learning essays on bioinformatics and medicine. It caters for a wide readership, ranging from machine learning practitioners and data scientists seeking methodological knowledge to address biomedical applications, to life science specialists in search of a gentle reference for advanced data analytics. With contributions from internationally renowned experts, the book covers foundational methodologies in a wide spectrum of life sciences applications, including electronic health record processing, diagnostic imaging, text processing, as well as omics-data processing. This survey of consolidated problems is complemented by a selection of advanced applications, including cheminformatics and biomedical interaction network analysis. A modern and mindful approach to the use of data-driven methodologies in the life sciences also requires careful consideration of the associated societal, ethical, legal and transparency challenges, which are covered in the concluding chapters of this book.
BibTex
@book{Bacciu_2022, doi = {10.1142/q0322}, url = {https://doi.org/10.1142%2Fq0322}, year = 2022, month = {feb}, publisher = {{WORLD} {SCIENTIFIC} ({EUROPE})}, author = {Davide Bacciu and Paulo J G Lisboa and Alfredo Vellido}, title = {Deep Learning in Biology and Medicine} }
18.
[MGY2021] Carlo Metta, Riccardo Guidotti, Yuan Yin, Patrick Gallinari, Salvatore Rinzivillo (2022) - IOS Press. In HHAI2022: Augmenting Human Intellect, S. Schlobach et al. (Eds.)
Abstract
Explainable AI consists in developing models allowing interaction between decision systems and humans by making the decisions understandable. We propose a case study for skin lesion diagnosis showing how it is possible to provide explanations of the decisions of a deep neural network trained to label skin lesions.
BibTex
@incollection{Metta_2022, doi = {10.3233/faia220209}, url = {https://doi.org/10.3233%2Ffaia220209}, year = 2022, month = {sep}, publisher = {{IOS} Press}, author = {Carlo Metta and Riccardo Guidotti and Yuan Yin and Patrick Gallinari and Salvatore Rinzivillo}, title = {Exemplars and Counterexemplars Explanations for Skin Lesion Classifiers}, booktitle = {{HHAI}2022: Augmenting Human Intellect} }
19.
[MG2021]Marchiori Manerba Marta, Guidotti Riccardo (2021) - Third Conference on Cognitive Machine Intelligence (COGMI) 2021. In 2021 IEEE Third International Conference on Cognitive Machine Intelligence (CogMI)
Abstract
At every stage of a supervised learning process, harmful biases can arise and be inadvertently introduced, ultimately leading to marginalization, discrimination, and abuse towards minorities. This phenomenon becomes particularly impactful in the sensitive real-world context of abusive language detection systems, where non-discrimination is difficult to assess. In addition, given the opaqueness of their internal behavior, the dynamics leading a model to a certain decision are often not clear nor accountable, and significant problems of trust could emerge. A robust value-oriented evaluation of models' fairness is therefore necessary. In this paper, we present FairShades, a model-agnostic approach for auditing the outcomes of abusive language detection systems. Combining explainability and fairness evaluation, FairShades can identify unintended biases and sensitive categories towards which models are most discriminative. This objective is pursued through the auditing of meaningful counterfactuals generated within CheckList framework. We conduct several experiments on BERT-based models to demonstrate our proposal's novelty and effectiveness for unmasking biases.
BibTex
@inproceedings{Manerba_2021, doi = {10.1109/cogmi52975.2021.00014}, url = {https://doi.org/10.1109%2Fcogmi52975.2021.00014}, year = 2021, month = {dec}, publisher = {{IEEE}}, author = {Marta Marchiori Manerba and Riccardo Guidotti}, title = {{FairShades}: Fairness Auditing via Explainability in Abusive Language Detection Systems}, booktitle = {2021 {IEEE} Third International Conference on Cognitive Machine Intelligence ({CogMI})} }
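The counterfactual auditing idea can be illustrated with a deliberately tiny sketch: sentences that differ only in an identity term are sent to an abusive-language classifier, and large gaps in the returned scores flag a potential bias. The corpus and the bag-of-words model below are invented toy assumptions; FairShades audits real BERT-based systems through the CheckList framework.

```python
# Toy counterfactual audit: identical sentences that differ only in an identity
# term are scored by a deliberately skewed toy classifier (1 = abusive).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ["you are stupid", "i hate you", "have a nice day",
               "thanks for your help", "old people are so annoying",
               "young people are so creative", "what a lovely idea"]
train_labels = [1, 1, 0, 0, 1, 0, 0]
clf = make_pipeline(CountVectorizer(), LogisticRegression())
clf.fit(train_texts, train_labels)

template = "{} people deserve respect"          # neutral sentence, identity term varies
for group in ["old", "young"]:
    p = clf.predict_proba([template.format(group)])[0, 1]
    print(f"{group:6s} -> abusive probability {p:.2f}")
# A large gap between the two scores on otherwise identical sentences flags a
# bias that the classifier has absorbed from the training data.
```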
20.
[BGG2021]Bodria Francesco, Giannotti Fosca, Guidotti Riccardo, Naretto Francesca, Pedreschi Dino, Rinzivillo Salvatore (2021)
Abstract
The widespread adoption of black-box models in Artificial Intelligence has enhanced the need for explanation methods to reveal how these obscure models reach specific decisions. Retrieving explanations is fundamental to unveil possible biases and to resolve practical or ethical issues. Nowadays, the literature offers a wide range of methods returning different types of explanations. We provide a categorization of explanation methods based on the type of explanation returned. We present the most recent and widely used explainers, and we show a visual comparison among explanations and a quantitative benchmarking.
BibTex
BibTex not found
21.
[GMP2021]
Guidotti Riccardo, Monreale Anna, Pedreschi Dino, Giannotti Fosca (2021) - Explainable AI Within the Digital Transformation and Cyber Physical Systems (pp. 9-31)
Abstract
This book presents Explainable Artificial Intelligence (XAI), which aims at producing explainable models that enable human users to understand and appropriately trust the obtained results. The authors discuss the challenges involved in making machine learning-based AI explainable. Firstly, that the explanations must be adapted to different stakeholders (end-users, policy makers, industries, utilities etc.) with different levels of technical knowledge (managers, engineers, technicians, etc.) in different application domains. Secondly, that it is important to develop an evaluation framework and standards in order to measure the effectiveness of the provided explanations at the human and the technical levels. This book gathers research contributions aiming at the development and/or the use of XAI techniques in order to address the aforementioned challenges in different applications such as healthcare, finance, cybersecurity, and document summarization. It allows highlighting the benefits and requirements of using explainable models in different application domains in order to provide guidance to readers to select the most adapted models to their specified problem and conditions. Includes recent developments of the use of Explainable Artificial Intelligence (XAI) in order to address the challenges of digital transition and cyber-physical systems; Provides a textual scientific description of the use of XAI in order to address the challenges of digital transition and cyber-physical systems; Presents examples and case studies in order to increase transparency and understanding of the methodological concepts.
BibTex
@book{2021, doi = {10.1007/978-3-030-76409-8}, url = {https://doi.org/10.1007%2F978-3-030-76409-8}, year = 2021, publisher = {Springer International Publishing}, editor = {Moamar Sayed-Mouchaweh}, title = {Explainable {AI} Within the Digital Transformation and Cyber Physical Systems} }
22.
[GD2021]Guidotti Riccardo, D’Onofrio Matteo (2021) - Frontiers in Artificial Intelligence
Abstract
Time series classification (TSC) is a pervasive and transversal problem in various fields ranging from disease diagnosis to anomaly detection in finance. Unfortunately, the most effective models used by Artificial Intelligence (AI) systems for TSC are not interpretable and hide the logic of the decision process, making them unusable in sensitive domains. Recent research is focusing on explanation methods to pair with the obscure classifier to recover this weakness. However, a TSC approach that is transparent by design and is simultaneously efficient and effective is even more preferable. To this aim, we propose an interpretable TSC method based on the patterns that can be extracted from the Matrix Profile (MP) of the time series in the training set. A smart design of the classification procedure allows obtaining an efficient and effective transparent classifier modeled as a decision tree that expresses the reasons for the classification as the presence of discriminative subsequences. Quantitative and qualitative experimentation shows that the proposed method overcomes the state-of-the-art interpretable approaches.
BibTex
@article{Guidotti_2021, doi = {10.3389/frai.2021.699448}, url = {https://doi.org/10.3389%2Ffrai.2021.699448}, year = 2021, month = {oct}, publisher = {Frontiers Media {SA}}, volume = {4}, author = {Riccardo Guidotti and Matteo D'Onofrio}, title = {Matrix Profile-Based Interpretable Time Series Classifier}, journal = {Frontiers in Artificial Intelligence} }
24.
[GM2021]Guidotti Riccardo, Monreale Anna (2021) - Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society
Abstract
Time series shapelets are discriminatory subsequences which are representative of a class, and their similarity to a time series can be used for successfully tackling the time series classification problem. The literature shows that Artificial Intelligence (AI) systems adopting classification models based on time series shapelets can be interpretable, more accurate, and significantly fast. Thus, in order to design a data-agnostic and interpretable classification approach, in this paper we first extend the notion of shapelets to different types of data, i.e., images, tabular and textual data. Then, based on this extended notion of shapelets we propose an interpretable data-agnostic classification method. Since the shapelets discovery can be time consuming, especially for data types more complex than time series, we exploit a notion of prototypes for finding candidate shapelets, and reducing both the time required to find a solution and the variance of shapelets. A wide experimentation on datasets of different types shows that the data-agnostic prototype-based shapelets returned by the proposed method empower an interpretable classification which is also fast, accurate, and stable. In addition, we show and we prove that shapelets can be at the basis of explainable AI methods.
BibTex
@inproceedings{Guidotti_2021, doi = {10.1145/3461702.3462553}, url = {https://doi.org/10.1145%2F3461702.3462553}, year = 2021, month = {jul}, publisher = {{ACM}}, author = {Riccardo Guidotti and Anna Monreale}, title = {Designing Shapelets for Interpretable Data-Agnostic Classification}, booktitle = {Proceedings of the 2021 {AAAI}/{ACM} Conference on {AI}, Ethics, and Society} }
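A minimal sketch of the shapelet idea in the time series case: the minimum distance between a candidate shapelet and any window of a series becomes a feature, and a shallow decision tree trained on those distances yields rules of the form "the series contains (or does not contain) a subsequence close to shapelet k". The synthetic data and the random extraction of candidate shapelets below are assumptions made for illustration, not the prototype-based discovery proposed in the paper.

```python
# Shapelet sketch on synthetic series: distances to candidate subsequences are
# the features of an interpretable tree. Random candidates stand in for the
# prototype-based shapelet discovery of the paper.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def make_series(n, with_bump):
    s = rng.normal(0, 0.3, (n, 100))
    if with_bump:
        s[:, 40:60] += np.hanning(20) * 2       # class-discriminative bump
    return s

X = np.vstack([make_series(50, True), make_series(50, False)])
y = np.array([1] * 50 + [0] * 50)

def min_dist(series, shapelet):
    """Minimum Euclidean distance between the shapelet and any window of the series."""
    windows = np.lib.stride_tricks.sliding_window_view(series, len(shapelet))
    return np.sqrt(((windows - shapelet) ** 2).sum(axis=1)).min()

shapelets = [X[i, s:s + 20] for i, s in zip(rng.integers(0, len(X), 10),
                                            rng.integers(0, 80, 10))]
features = np.array([[min_dist(ts, sh) for sh in shapelets] for ts in X])

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(features, y)
print("training accuracy:", tree.score(features, y))
# Each split reads as: "the best match to shapelet k is closer (or farther) than t".
```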
25.
[MGY2021]Metta Carlo, Guidotti Riccardo, Yin Yuan, Gallinari Patrick, Rinzivillo Salvatore (2021) - 2021 IEEE Symposium on Computers and Communications (ISCC). In 2021 IEEE Symposium on Computers and Communications (ISCC)
Abstract
Explainable AI consists in developing mechanisms allowing for an interaction between decision systems and humans by making the decisions of the former understandable. This is particularly important in sensitive contexts like in the medical domain. We propose a use case study, for skin lesion diagnosis, illustrating how it is possible to provide the practitioner with explanations on the decisions of a state of the art deep neural network classifier trained to characterize skin lesions from examples. Our framework consists of a trained classifier onto which an explanation module operates. The latter is able to offer the practitioner exemplars and counterexemplars for the classification diagnosis thus allowing the physician to interact with the automatic diagnosis system. The exemplars are generated via an adversarial autoencoder. We illustrate the behavior of the system on representative examples.
BibTex
@inproceedings{Metta_2021, doi = {10.1109/iscc53001.2021.9631485}, url = {https://doi.org/10.1109%2Fiscc53001.2021.9631485}, year = 2021, month = {sep}, publisher = {{IEEE}}, author = {Carlo Metta and Riccardo Guidotti and Yuan Yin and Patrick Gallinari and Salvatore Rinzivillo}, title = {Exemplars and Counterexemplars Explanations for Image Classifiers, Targeting Skin Lesion Labeling}, booktitle = {2021 {IEEE} Symposium on Computers and Communications ({ISCC})} }
26.
[GR2021]Guidotti Riccardo, Ruggieri Salvatore (2021)
Abstract
In eXplainable Artificial Intelligence (XAI), several counterfactual explainers have been proposed, each focusing on some desirable properties of counterfactual instances: minimality, actionability, stability, diversity, plausibility, discriminative power. We propose an ensemble of counterfactual explainers that boosts weak explainers, which provide only a subset of such properties, to a powerful method covering all of them. The ensemble runs weak explainers on a sample of instances and of features, and it combines their results by exploiting a diversity-driven selection function. The method is model-agnostic and, through a wrapping approach based on autoencoders, it is also data-agnostic.
BibTex
BibTex not found
27.
[MBG2021] Metta Carlo, Beretta Andrea, Guidotti Riccardo, Yin Yuan, Gallinari Patrick, Rinzivillo Salvatore, Giannotti Fosca (2021) - arXiv preprint. In International Journal of Data Science and Analytics
Abstract
A key issue in critical contexts such as medical diagnosis is the interpretability of the deep learning models adopted in decision-making systems. Research in eXplainable Artificial Intelligence (XAI) is trying to solve this issue. However, often XAI approaches are only tested on generalist classifiers and do not represent realistic problems such as those of medical diagnosis. In this paper, we analyze a case study on skin lesion images where we customize an existing XAI approach for explaining a deep learning model able to recognize different types of skin lesions. The explanation is formed by synthetic exemplar and counter-exemplar images of skin lesion and offers the practitioner a way to highlight the crucial traits responsible for the classification decision. A survey conducted with domain experts, beginners and unskilled people proves that the usage of explanations increases the trust and confidence in the automatic decision system. Also, an analysis of the latent space adopted by the explainer unveils that some of the most frequent skin lesion classes are distinctly separated. This phenomenon could derive from the intrinsic characteristics of each class and, hopefully, can provide support in the resolution of the most frequent misclassifications by human experts.
BibTex
BibTex not found
28.
[BGM2021]Bonsignori Valerio, Guidotti Riccardo, Monreale Anna (2021) - Discovery Science
Abstract
Decision tree classifiers have been proved to be among the most interpretable models due to their intuitive structure that illustrates decision processes in form of logical rules. Unfortunately, more complex tree-based classifiers such as oblique trees and random forests exceed the accuracy of decision trees at the cost of becoming non-interpretable. In this paper, we propose a method that takes as input any tree-based classifier and returns a single decision tree able to approximate its behavior. Our proposal merges tree-based classifiers by an intensional and extensional approach and applies a post-hoc explanation strategy. Our experiments show that the retrieved single decision tree is at least as accurate as the original tree-based model, faithful, and more interpretable.
BibTex
@incollection{Bonsignori_2021, doi = {10.1007/978-3-030-88942-5_27}, url = {https://doi.org/10.1007%2F978-3-030-88942-5_27}, year = 2021, publisher = {Springer International Publishing}, pages = {347--357}, author = {Valerio Bonsignori and Riccardo Guidotti and Anna Monreale}, title = {Deriving a Single Interpretable Model by Merging Tree-Based Classifiers}, booktitle = {Discovery Science} }
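A rough way to appreciate the goal of the paper is plain prediction-based distillation: fit one shallow tree on the labels produced by the ensemble and measure its fidelity. The sketch below uses this simple distillation as a stand-in; the intensional and extensional merging procedure proposed in the paper is more refined than this.

```python
# Simple distillation stand-in: a single shallow tree is trained to imitate a
# random forest, then checked for accuracy and fidelity.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
single = DecisionTreeClassifier(max_depth=4, random_state=0)
single.fit(X_tr, forest.predict(X_tr))          # learn to imitate the forest

print("forest accuracy     :", round(forest.score(X_te, y_te), 3))
print("single-tree accuracy:", round(single.score(X_te, y_te), 3))
print("fidelity to forest  :",
      round((single.predict(X_te) == forest.predict(X_te)).mean(), 3))
```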
29.
[RAB2021]Resta Michele, Monreale Anna, Bacciu Davide (2021) - Entropy. In Entropy
Abstract
The biomedical field is characterized by an ever-increasing production of sequential data, which often come in the form of biosignals capturing the time-evolution of physiological processes, such as blood pressure and brain activity. This has motivated a large body of research dealing with the development of machine learning techniques for the predictive analysis of such biosignals. Unfortunately, in high-stakes decision making, such as clinical diagnosis, the opacity of machine learning models becomes a crucial aspect to be addressed in order to increase the trust and adoption of AI technology. In this paper, we propose a model agnostic explanation method, based on occlusion, that enables the learning of the input’s influence on the model predictions. We specifically target problems involving the predictive analysis of time-series data and the models that are typically used to deal with data of such nature, i.e., recurrent neural networks. Our approach is able to provide two different kinds of explanations: one suitable for technical experts, who need to verify the quality and correctness of machine learning models, and one suited to physicians, who need to understand the rationale underlying the prediction to make aware decisions. A wide experimentation on different physiological data demonstrates the effectiveness of our approach both in classification and regression tasks.
BibTex
@article{Resta_2021, doi = {10.3390/e23081064}, url = {https://doi.org/10.3390%2Fe23081064}, year = 2021, month = {aug}, publisher = {{MDPI} {AG}}, volume = {23}, number = {8}, pages = {1064}, author = {Michele Resta and Anna Monreale and Davide Bacciu}, title = {Occlusion-Based Explanations in Deep Recurrent Models for Biomedical Signals}, journal = {Entropy} }
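The occlusion principle is easy to demonstrate: hide one window of the input signal at a time, re-run the model, and attribute importance to the windows whose occlusion changes the predicted probability the most. The toy signal and the non-recurrent classifier below are assumptions kept for brevity; the paper applies the idea to recurrent networks on real biosignals.

```python
# Occlusion sketch: hide one window at a time and record the change in the
# predicted probability. Toy data and a linear model keep the example short.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, length = 200, 120
X = rng.normal(0, 1, (n, length))
y = (X[:, 50:70].mean(axis=1) > 0).astype(int)    # time steps 50-70 decide the class
clf = LogisticRegression(max_iter=2000).fit(X, y)

x = X[0]
base = clf.predict_proba([x])[0, 1]
window, importance = 10, np.zeros(length)
for start in range(length - window + 1):
    occluded = x.copy()
    occluded[start:start + window] = 0.0          # baseline value for the hidden window
    drop = abs(base - clf.predict_proba([occluded])[0, 1])
    importance[start:start + window] += drop / window

print("most influential time steps:", sorted(importance.argsort()[-5:].tolist()))
```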
30.
[PB2021] Panigutti Cecilia, Bosi Emanuele (2021) - Il Diabete Online, official journal of the Società Italiana di Diabetologia, Translational Medicine: Clinical Applications of Basic Research, Vol. 33, No. 1, 2021. In Il Diabete
Abstract
ABSTRACT NOT FOUND
BibTex
@article{2021, doi = {10.30682/ildia2101f}, url = {https://doi.org/10.30682%2Fildia2101f}, year = 2021, publisher = {Bononia University Press}, volume = {33}, number = {1}, title = {Intelligenza artificiale in ambito diabetologico: prospettive, dalla ricerca di base alle applicazioni cliniche}, journal = {il Diabete} }
31.
[PPB2021]
Panigutti Cecilia, Perotti Alan, Panisson André, Bajardi Paolo, Pedreschi Dino (2021) - Information Processing & Management. In Information Processing & Management
Abstract
Highlights: We present a pipeline to detect and explain potential fairness issues in Clinical DSS. We study and compare different multi-label classification disparity measures. We explore ICD9 bias in MIMIC-IV, an openly available ICU benchmark dataset.
BibTex
@article{Panigutti_2021, doi = {10.1016/j.ipm.2021.102657}, url = {https://doi.org/10.1016%2Fj.ipm.2021.102657}, year = 2021, month = {sep}, publisher = {Elsevier {BV}}, volume = {58}, number = {5}, pages = {102657}, author = {Cecilia Panigutti and Alan Perotti and Andr{\'{e}} Panisson and Paolo Bajardi and Dino Pedreschi}, title = {{FairLens}: Auditing black-box clinical decision support systems}, journal = {Information Processing \& Management} }
32.
[NPN2020] Naretto Francesca, Pellungrini Roberto, Nardini Franco Maria, Giannotti Fosca (2020) - ECML PKDD 2020 Workshops. In ECML PKDD 2020 Workshops
Abstract
The analysis of privacy risk for mobility data is a fundamental part of any privacy-aware process based on such data. Mobility data are highly sensitive. Therefore, the correct identification of the privacy risk before releasing the data to the public is of utmost importance. However, existing privacy risk assessment frameworks have high computational complexity. To tackle these issues, some recent work proposed a solution based on classification approaches to predict privacy risk using mobility features extracted from the data. In this paper, we propose an improvement of this approach by applying long short-term memory (LSTM) neural networks to predict the privacy risk directly from original mobility data. We empirically evaluate privacy risk on real data by applying our LSTM-based approach. Results show that our proposed method based on a LSTM network is effective in predicting the privacy risk with results in terms of F1 of up to 0.91. Moreover, to explain the predictions of our model, we employ a state-of-the-art explanation algorithm, Shap. We explore the resulting explanation, showing how it is possible to provide effective predictions while explaining them to the end-user.
BibTex
@incollection{Naretto_2020, doi = {10.1007/978-3-030-65965-3_34}, url = {https://doi.org/10.1007%2F978-3-030-65965-3_34}, year = 2020, publisher = {Springer International Publishing}, pages = {501--516}, author = {Francesca Naretto and Roberto Pellungrini and Franco Maria Nardini and Fosca Giannotti}, title = {Prediction and Explanation of Privacy Risk on Mobility Data with Neural Networks}, booktitle = {{ECML} {PKDD} 2020 Workshops} }
33.
[NPM2020] Naretto Francesca, Pellungrini Roberto, Monreale Anna, Nardini Franco Maria, Musolesi Mirco (2020) - Discovery Science. In Discovery Science Conference
Abstract
Mobility data is a proxy of different social dynamics and its analysis enables a wide range of user services. Unfortunately, mobility data are very sensitive because the sharing of people's whereabouts may raise serious privacy concerns. Existing frameworks for privacy risk assessment provide tools to identify and measure privacy risks, but they often (i) have high computational complexity; and (ii) are not able to provide users with a justification of the reported risks. In this paper, we propose expert, a new framework for the prediction and explanation of privacy risk on mobility data. We empirically evaluate privacy risk on real data, simulating a privacy attack with a state-of-the-art privacy risk assessment framework. We then extract individual mobility profiles from the data for predicting their risk. We compare the performance of several machine learning algorithms in order to identify the best approach for our task. Finally, we show how it is possible to explain privacy risk prediction on real data, using two algorithms: Shap, a feature importance-based method and Lore, a rule-based method. Overall, expert is able to provide a user with the privacy risk and an explanation of the risk itself. The experiments show excellent performance for the prediction task.
BibTex
@incollection{Naretto_2020, doi = {10.1007/978-3-030-61527-7_27}, url = {https://doi.org/10.1007%2F978-3-030-61527-7_27}, year = 2020, publisher = {Springer International Publishing}, pages = {403--418}, author = {Francesca Naretto and Roberto Pellungrini and Anna Monreale and Franco Maria Nardini and Mirco Musolesi}, title = {Predicting and Explaining Privacy Risk Exposure in Mobility Data}, booktitle = {Discovery Science} }
34.
[SGM2019] Setzu Mattia, Guidotti Riccardo, Monreale Anna, Turini Franco (2020) - Machine Learning and Knowledge Discovery in Databases. In ECML PKDD 2019: Machine Learning and Knowledge Discovery in Databases
Abstract
Artificial Intelligence systems often adopt machine learning models encoding complex algorithms with potentially unknown behavior. As the application of these “black box” models grows, it is our responsibility to understand their inner working and formulate them in human-understandable explanations. To this end, we propose a rule-based model-agnostic explanation method that follows a local-to-global schema: it generalizes a global explanation summarizing the decision logic of a black box starting from the local explanations of single predicted instances. We define a scoring system based on a rule relevance score to extract global explanations from a set of local explanations in the form of decision rules. Experiments on several datasets and black boxes show the stability, and low complexity of the global explanations provided by the proposed solution in comparison with baselines and state-of-the-art global explainers.
BibTex
@incollection{Setzu_2020, doi = {10.1007/978-3-030-43823-4_14}, url = {https://doi.org/10.1007%2F978-3-030-43823-4_14}, year = 2020, publisher = {Springer International Publishing}, pages = {159--171}, author = {Mattia Setzu and Riccardo Guidotti and Anna Monreale and Franco Turini}, title = {Global Explanations with Local Scoring}, booktitle = {Machine Learning and Knowledge Discovery in Databases} }
35.
[GMM2020] Guidotti Riccardo, Monreale Anna, Matwin Stan, Pedreschi Dino (2020) - Proceedings of the AAAI Conference on Artificial Intelligence
Abstract
We present an approach to explain the decisions of black box image classifiers through synthetic exemplar and counter-exemplar learnt in the latent feature space. Our explanation method exploits the latent representations learned through an adversarial autoencoder for generating a synthetic neighborhood of the image for which an explanation is required. A decision tree is trained on a set of images represented in the latent space, and its decision rules are used to generate exemplar images showing how the original image can be modified to stay within its class. Counterfactual rules are used to generate counter-exemplars showing how the original image can “morph” into another class. The explanation also comprehends a saliency map highlighting the areas that contribute to its classification, and areas that push it into another class. A wide and deep experimental evaluation proves that the proposed method outperforms existing explainers in terms of fidelity, relevance, coherence, and stability, besides providing the most useful and interpretable explanations.
BibTex
@article{Guidotti_2020, doi = {10.1609/aaai.v34i09.7116}, url = {https://doi.org/10.1609%2Faaai.v34i09.7116}, year = 2020, month = {apr}, publisher = {Association for the Advancement of Artificial Intelligence ({AAAI})}, volume = {34}, number = {09}, pages = {13665--13668}, author = {Riccardo Guidotti and Anna Monreale and Stan Matwin and Dino Pedreschi}, title = {Explaining Image Classifiers Generating Exemplars and Counter-Exemplars from Latent Representations}, journal = {Proceedings of the {AAAI} Conference on Artificial Intelligence} }
36.
[GMM2019] Guidotti Riccardo, Monreale Anna, Matwin Stan, Pedreschi Dino (2020) - Black Box Explanation by Learning Image Exemplars in the Latent Feature Space. In Machine Learning and Knowledge Discovery in Databases
Abstract
We present an approach to explain the decisions of black box models for image classification. While using the black box to label images, our explanation method exploits the latent feature space learned through an adversarial autoencoder. The proposed method first generates exemplar images in the latent feature space and learns a decision tree classifier. Then, it selects and decodes exemplars respecting local decision rules. Finally, it visualizes them in a manner that shows to the user how the exemplars can be modified to either stay within their class, or to become counter-factuals by “morphing” into another class. Since we focus on black box decision systems for image classification, the explanation obtained from the exemplars also provides a saliency map highlighting the areas of the image that contribute to its classification, and areas of the image that push it into another class. We present the results of an experimental evaluation on three datasets and two black box models. Besides providing the most useful and interpretable explanations, we show that the proposed method outperforms existing explainers in terms of fidelity, relevance, coherence, and stability.
BibTex
@incollection{Guidotti_2020, doi = {10.1007/978-3-030-46150-8_12}, url = {https://doi.org/10.1007%2F978-3-030-46150-8_12}, year = 2020, publisher = {Springer International Publishing}, pages = {189--205}, author = {Riccardo Guidotti and Anna Monreale and Stan Matwin and Dino Pedreschi}, title = {Black Box Explanation by Learning Image Exemplars in the Latent Feature Space}, booktitle = {Machine Learning and Knowledge Discovery in Databases} }
37.
[PGG2019] Pedreschi Dino, Giannotti Fosca, Guidotti Riccardo, Monreale Anna, Ruggieri Salvatore, Turini Franco (2019) - Proceedings of the AAAI Conference on Artificial Intelligence. In Proceedings of the AAAI Conference on Artificial Intelligence
Abstract
Black box AI systems for automated decision making, often based on machine learning over (big) data, map a user's features into a class or a score without exposing the reasons why. This is problematic not only for lack of transparency, but also for possible biases inherited by the algorithms from human prejudices and collection artifacts hidden in the training data, which may lead to unfair or wrong decisions. We focus on the urgent open challenge of how to construct meaningful explanations of opaque AI/ML systems, introducing the local-to-global framework for black box explanation, articulated along three lines: (i) the language for expressing explanations in terms of logic rules, with statistical and causal interpretation; (ii) the inference of local explanations for revealing the decision rationale for a specific case, by auditing the black box in the vicinity of the target instance; (iii) the bottom-up generalization of many local explanations into simple global ones, with algorithms that optimize for quality and comprehensibility. We argue that the local-first approach opens the door to a wide variety of alternative solutions along different dimensions: a variety of data sources (relational, text, images, etc.), a variety of learning problems (multi-label classification, regression, scoring, ranking), a variety of languages for expressing meaningful explanations, a variety of means to audit a black box.
BibTex
@article{Pedreschi_2019, doi = {10.1609/aaai.v33i01.33019780}, url = {https://doi.org/10.1609%2Faaai.v33i01.33019780}, year = 2019, month = {jul}, publisher = {Association for the Advancement of Artificial Intelligence ({AAAI})}, volume = {33}, number = {01}, pages = {9780--9784}, author = {Dino Pedreschi and Fosca Giannotti and Riccardo Guidotti and Anna Monreale and Salvatore Ruggieri and Franco Turini}, title = {Meaningful Explanations of Black Box {AI} Decision Systems}, journal = {Proceedings of the {AAAI} Conference on Artificial Intelligence} }
38.
[G2021]Guidotti Riccardo (2021) - Artificial Intelligence. In Artificial Intelligence, 103428
Abstract
Evaluating local explanation methods is a difficult task due to the lack of a shared and universally accepted definition of explanation. In the literature, one of the most common ways to assess the performance of an explanation method is to measure the fidelity of the explanation with respect to the classification of a black box model adopted by an Artificial Intelligent system for making a decision. However, this kind of evaluation only measures the degree of adherence of the local explainer in reproducing the behavior of the black box classifier with respect to the final decision. Therefore, the explanation provided by the local explainer could be different in the content even though it leads to the same decision of the AI system. In this paper, we propose an approach that allows to measure to which extent the explanations returned by local explanation methods are correct with respect to a synthetic ground truth explanation. Indeed, the proposed methodology enables the generation of synthetic transparent classifiers for which the reason for the decision taken, i.e., a synthetic ground truth explanation, is available by design. Experimental results show how the proposed approach allows to easily evaluate local explanations on the ground truth and to characterize the quality of local explanation methods.
BibTex
@article{Guidotti_2021, doi = {10.1016/j.artint.2020.103428}, url = {https://doi.org/10.1016%2Fj.artint.2020.103428}, year = 2021, month = {feb}, publisher = {Elsevier {BV}}, volume = {291}, pages = {103428}, author = {Riccardo Guidotti}, title = {Evaluating local explanation methods on ground truth}, journal = {Artificial Intelligence} }
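The core idea of a synthetic ground truth can be sketched directly: a transparent classifier is built so that, by design, only a known subset of features determines its output, and any local explainer can then be scored by how well its top-ranked features match that subset. The naive perturbation-based explainer below is an assumption used only to close the loop of the example; it is not one of the methods evaluated in the paper.

```python
# Synthetic ground-truth evaluation: only features 0-2 determine the label, so
# the "true" explanation is known by design and the explainer can be scored.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(0, 1, (500, 10))
true_features = {0, 1, 2}

def transparent_classifier(data):
    # Transparent decision logic: only the first three features matter.
    return (data[:, 0] + 2 * data[:, 1] - data[:, 2] > 0).astype(int)

x = X[0]
base = transparent_classifier(x[None, :])[0]

# Naive local explainer (an assumption): importance = how often nudging a
# feature flips the transparent classifier's label.
importance = np.zeros(X.shape[1])
for f in range(X.shape[1]):
    for delta in np.linspace(-2, 2, 21):
        z = x.copy()
        z[f] += delta
        importance[f] += transparent_classifier(z[None, :])[0] != base

top3 = set(np.argsort(-importance)[:3].tolist())
print("explainer top-3:", sorted(top3),
      "precision vs ground truth:", len(top3 & true_features) / 3)
```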
39.
[LGR2020] Lampridis Orestis, Guidotti Riccardo, Ruggieri Salvatore (2020) - Discovery Science. In International Conference on Discovery Science (pp. 357-373). Springer, Cham.
Abstract
We present xspells, a model-agnostic local approach for explaining the decisions of a black box model for sentiment classification of short texts. The explanations provided consist of a set of exemplar sentences and a set of counter-exemplar sentences. The former are examples classified by the black box with the same label as the text to explain. The latter are examples classified with a different label (a form of counter-factuals). Both are close in meaning to the text to explain, and both are meaningful sentences – albeit they are synthetically generated. xspells generates neighbors of the text to explain in a latent space using Variational Autoencoders for encoding text and decoding latent instances. A decision tree is learned from randomly generated neighbors, and used to drive the selection of the exemplars and counter-exemplars. We report experiments on two datasets showing that xspells outperforms the well-known lime method in terms of quality of explanations, fidelity, and usefulness, and that it is comparable to it in terms of stability.
BibTex
@incollection{Lampridis_2020, doi = {10.1007/978-3-030-61527-7_24}, url = {https://doi.org/10.1007%2F978-3-030-61527-7_24}, year = 2020, publisher = {Springer International Publishing}, pages = {357--373}, author = {Orestis Lampridis and Riccardo Guidotti and Salvatore Ruggieri}, title = {Explaining Sentiment Classification with Synthetic Exemplars and Counter-Exemplars}, booktitle = {Discovery Science} }
40.
[PGM2019]
Panigutti Cecilia, Guidotti Riccardo, Monreale Anna, Pedreschi Dino (2019) - Precision Health and Medicine. In International Workshop on Health Intelligence (pp. 97-110). Springer, Cham.
Abstract
Today the state-of-the-art performance in classification is achieved by the so-called “black boxes”, i.e. decision-making systems whose internal logic is obscure. Such models could revolutionize the health-care system, however their deployment in real-world diagnosis decision support systems is subject to several risks and limitations due to the lack of transparency. The typical classification problem in health-care requires a multi-label approach since the possible labels are not mutually exclusive, e.g. diagnoses. We propose MARLENA, a model-agnostic method which explains multi-label black box decisions. MARLENA explains an individual decision in three steps. First, it generates a synthetic neighborhood around the instance to be explained using a strategy suitable for multi-label decisions. It then learns a decision tree on such neighborhood and finally derives from it a decision rule that explains the black box decision. Our experiments show that MARLENA performs well in terms of mimicking the black box behavior while gaining at the same time a notable amount of interpretability through compact decision rules, i.e. rules with limited length.
BibTex
@incollection{Panigutti_2019, doi = {10.1007/978-3-030-24409-5_9}, url = {https://doi.org/10.1007%2F978-3-030-24409-5_9}, year = 2019, month = {aug}, publisher = {Springer International Publishing}, pages = {97--110}, author = {Cecilia Panigutti and Riccardo Guidotti and Anna Monreale and Dino Pedreschi}, title = {Explaining Multi-label Black-Box Classifiers for Health Applications}, booktitle = {Precision Health and Medicine} }
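The three steps described in the abstract can be pictured with the following hedged Python sketch (not the MARLENA implementation): a Gaussian synthetic neighborhood, a multi-output surrogate decision tree, and a decision rule read off the tree path of the instance. bb_predict is assumed to return a binary label vector, and x a 1-D numpy array.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def explain_multilabel(x, bb_predict, n_samples=1000, sigma=0.3):
    # 1. synthetic neighborhood around the instance (simple Gaussian perturbation here)
    X_local = x + np.random.normal(scale=sigma, size=(n_samples, x.shape[0]))
    Y_local = np.array([bb_predict(xi) for xi in X_local])    # multi-label outputs

    # 2. surrogate decision tree on the neighborhood (sklearn trees support multi-output y)
    tree = DecisionTreeClassifier(max_depth=4).fit(X_local, Y_local)

    # 3. decision rule = conjunction of the split conditions along x's root-to-leaf path
    path = tree.decision_path(x.reshape(1, -1)).indices
    feature, threshold = tree.tree_.feature, tree.tree_.threshold
    rule = [
        f"x[{feature[n]}] {'<=' if x[feature[n]] <= threshold[n] else '>'} {threshold[n]:.3f}"
        for n in path if feature[n] >= 0                      # skip the leaf node
    ]
    return " AND ".join(rule), bb_predict(x)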
41.
[GMC2019]Guidotti Riccardo, Monreale Anna, Cariaggi Leonardo (2021) - Advances in Knowledge Discovery and Data Mining. In Pacific-Asia Conference on Knowledge Discovery and Data Mining (pp. 55-68). Springer, Cham.
Abstract
Given the wide use of machine learning approaches based on opaque prediction models, understanding the reasons behind the decisions of black box decision systems is nowadays a crucial topic. We address the problem of providing meaningful explanations in widely applied image classification tasks. In particular, we explore the impact of changing the neighborhood generation function for a local interpretable model-agnostic explainer by proposing four different variants. All the proposed methods are based on a grid-based segmentation of the images, but each of them proposes a different strategy for generating the neighborhood of the image for which an explanation is required. An extensive experimentation shows both the improvements and the weaknesses of each proposed approach.
BibTex
@incollection{Guidotti_2019, doi = {10.1007/978-3-030-16148-4_5}, url = {https://doi.org/10.1007%2F978-3-030-16148-4_5}, year = 2019, publisher = {Springer International Publishing}, pages = {55--68}, author = {Riccardo Guidotti and Anna Monreale and Leonardo Cariaggi}, title = {Investigating Neighborhood Generation Methods for Explanations of Obscure Image Classifiers}, booktitle = {Advances in Knowledge Discovery and Data Mining} }
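One possible grid-based neighborhood generator can be sketched as follows: split the image into a regular grid and create neighbors by "switching off" random cells (replacing them with the mean color). This illustrates the general idea under my own assumptions; it is not one of the paper's four variants verbatim.

import numpy as np

def grid_neighborhood(image, n_samples=100, grid=8, p_off=0.3, rng=None):
    # image: H x W (x C) numpy array; returns perturbed images and their binary encodings
    if rng is None:
        rng = np.random.default_rng()
    h, w = image.shape[:2]
    ch, cw = h // grid, w // grid
    fill = image.mean(axis=(0, 1))                        # neutral replacement value
    masks = rng.random((n_samples, grid, grid)) < p_off   # cells to switch off
    neighbors = []
    for m in masks:
        img = image.astype(float).copy()
        for i in range(grid):
            for j in range(grid):
                if m[i, j]:
                    img[i * ch:(i + 1) * ch, j * cw:(j + 1) * cw] = fill
        neighbors.append(img)
    return np.stack(neighbors), masks.reshape(n_samples, -1)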
42.
[GMS2020]
Guidotti Riccardo, Monreale Anna, Spinnato Francesco, Pedreschi Dino, Giannotti Fosca (2020) - 2020 IEEE Second International Conference on Cognitive Machine Intelligence (CogMI)
Abstract
We present a method to explain the decisions of black box models for time series classification. The explanation consists of factual and counterfactual shapelet-based rules revealing the reasons for the classification, and of a set of exemplars and counter-exemplars highlighting similarities and differences with the time series under analysis. The proposed method first generates exemplar and counter-exemplar time series in the latent feature space and learns a local latent decision tree classifier. Then, it selects and decodes those that respect the decision rules explaining the decision. Finally, it learns on them a shapelet tree that reveals the parts of the time series that must, and must not, be contained to obtain the returned outcome from the black box. An extensive experimentation shows that the proposed method provides faithful, meaningful and interpretable explanations.
BibTex
@inproceedings{Guidotti_2020, doi = {10.1109/cogmi50398.2020.00029}, url = {https://doi.org/10.1109%2Fcogmi50398.2020.00029}, year = 2020, month = {oct}, publisher = {{IEEE}}, author = {Riccardo Guidotti and Anna Monreale and Francesco Spinnato and Dino Pedreschi and Fosca Giannotti}, title = {Explaining Any Time Series Classifier}, booktitle = {2020 {IEEE} Second International Conference on Cognitive Machine Intelligence ({CogMI})} }
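A tiny building block of shapelet-based rules, sketched below under my own assumptions: the minimum sliding-window distance between a candidate shapelet and a time series, which is what decides whether a rule of the form "the series contains / does not contain this shapelet" fires. The actual method learns the shapelets and the tree on latent exemplars; this is only the matching primitive.

import numpy as np

def shapelet_distance(series, shapelet):
    # minimum Euclidean distance between the shapelet and any window of the series
    series, shapelet = np.asarray(series, float), np.asarray(shapelet, float)
    m, best = len(shapelet), np.inf
    for start in range(len(series) - m + 1):
        best = min(best, np.linalg.norm(series[start:start + m] - shapelet))
    return best

def rule_fires(series, shapelet, threshold):
    # True if the series "contains" the shapelet, i.e. the distance is below the threshold
    return shapelet_distance(series, shapelet) < threshold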
43.
[PPP2020]
Panigutti Cecilia, Perotti Alan, Pedreschi Dino (2020) - FAT* '20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. In FAT* '20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency
Abstract
Several recent advancements in Machine Learning involve black-box models: algorithms that do not provide human-understandable explanations in support of their decisions. This limitation hampers the fairness, accountability and transparency of these models; the field of eXplainable Artificial Intelligence (XAI) tries to solve this problem by providing human-understandable explanations for black-box models. However, healthcare datasets (and the related learning tasks) often present peculiar features, such as sequential data, multi-label predictions, and links to structured background knowledge. In this paper, we introduce Doctor XAI, a model-agnostic explainability technique able to deal with multi-labeled, sequential, ontology-linked data. We focus on explaining Doctor AI, a multi-label classifier which takes as input the clinical history of a patient in order to predict the next visit. Furthermore, we show how exploiting the temporal dimension in the data and the domain knowledge encoded in the medical ontology improves the quality of the mined explanations.
BibTex
@inproceedings{Panigutti_2020, doi = {10.1145/3351095.3372855}, url = {https://doi.org/10.1145%2F3351095.3372855}, year = 2020, month = {jan}, publisher = {{ACM}}, author = {Cecilia Panigutti and Alan Perotti and Dino Pedreschi}, title = {Doctor {XAI}}, booktitle = {Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency} }
44.
[RGG2020]Ruggieri Salvatore, Giannotti Fosca, Guidotti Riccardo, Monreale Anna, Pedreschi Dino, Turini Franco (2020). In ANNUARIO DI DIRITTO COMPARATO E DI STUDI LEGISLATIVI
Abstract
The pervasive adoption of Artificial Intelligence (AI) models in the modern information society requires counterbalancing the growing decision power delegated to AI models with risk assessment methodologies. In this paper, we consider the risk of discriminatory decisions and review approaches for discovering discrimination and for designing fair AI models. We highlight the tight relations between discrimination discovery and explainable AI, with the latter being a more general approach for understanding the behavior of black boxes.
BibTex
BibTex not found
45.
[GM2020]Guidotti Riccardo, Monreale Anna (2020) - 2020 IEEE International Conference on Data Mining (ICDM). In 2020 IEEE International Conference on Data Mining (ICDM)
Abstract
Synthetic data generation has been widely adopted in software testing, data privacy, imbalanced learning, machine learning explanation, etc. In such contexts, it is important to generate data samples located within “local” areas surrounding specific instances. Local synthetic data can help the learning phase of predictive models, and it is fundamental for methods explaining the local behavior of obscure classifiers. The contribution of this paper is twofold. First, we introduce a method based on generative operators that allows synthetic neighborhood generation by applying specific perturbations to a given input instance. The key factor consists in performing a data transformation that makes the method applicable to any type of data, i.e., data-agnostic. Second, we design a framework for evaluating the goodness of local synthetic neighborhoods exploiting both supervised and unsupervised methodologies. An extensive experimentation shows the effectiveness of the proposed method.
BibTex
@inproceedings{Guidotti_2020, doi = {10.1109/icdm50108.2020.00122}, url = {https://doi.org/10.1109%2Ficdm50108.2020.00122}, year = 2020, month = {nov}, publisher = {{IEEE}}, author = {Riccardo Guidotti and Anna Monreale}, title = {Data-Agnostic Local Neighborhood Generation}, booktitle = {2020 {IEEE} International Conference on Data Mining ({ICDM})} }
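The evaluation side can be pictured with the hedged sketch below: an unsupervised compactness check and a supervised discriminator check on a synthetic neighborhood. The metric names and choices are mine for illustration, not necessarily those of the paper's framework.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def neighborhood_quality(x, Z_synth, X_real_local):
    # unsupervised: how tightly the synthetic points surround the instance x
    compactness = np.mean(np.linalg.norm(Z_synth - x, axis=1))

    # supervised: a discriminator that cannot tell synthetic from real local data
    # (accuracy close to 0.5) suggests the generated neighborhood is realistic
    X = np.vstack([Z_synth, X_real_local])
    y = np.r_[np.zeros(len(Z_synth)), np.ones(len(X_real_local))]
    acc = cross_val_score(RandomForestClassifier(n_estimators=50), X, y, cv=3).mean()
    return {"compactness": compactness, "discriminator_accuracy": acc}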
46.
[BPP2020]Bodria Francesco, Panisson André, Perotti Alan, Piaggesi Simone (2020) - Discussion Paper
Abstract
Abstract not available
BibTex
BibTex not found
Research Line 1
47.
[M2020]Monreale Anna (2020) - DPCE Online, [S.l.], v. 44, n. 3. In DPCE Online, [S.l.], v. 44, n. 3, oct. 2020. ISSN 2037-6677
Abstract
Abstract not available
BibTex
BibTex not found
Research Line 5
48.
[GPP2019]
Giannotti Fosca, Pedreschi Dino, Panigutti Cecilia (2019) - Biopolitica, Pandemia e democrazia. Rule of law nella società digitale. In BIOPOLITICA, PANDEMIA E DEMOCRAZIA Rule of law nella società digitale
Abstract
The health crisis has transformed the relationship between the State and its citizens, leading to temporary limitations of fundamental rights and bringing out conflicts between the two dimensions of health, as a right of the individual and as a right of the community, as well as between the right to health and the needs of the economic system. To cope with the emergency, the traditional balance between the powers of the State has been altered, in a perspective in which the time of the emergency seems likely to cast its shadow over the future for a long while. The pandemic has also reinforced the centrality of digital technology, from the use of artificial intelligence software for contact tracing to the new connectivity of remote work, and on to telemedicine. The new technologies play a role in prevention and control, but they also raise delicate constitutional questions: how can individual privacy be protected in the face of the digital Panopticon? How should the status of digital platforms, genuine private technological powers, be framed within our legal systems? The research presented in this volume and in the two companion volumes offers reflections on these themes by scholars from a wide range of disciplines: physicians, jurists, engineers, and experts in robotics and AI analyze the effects of the health emergency on the resilience of the Western democratic model, with the aim of opening a reflection on guidelines for rebuilding the country beyond the pandemic. In particular, this third volume addresses the impact of digital technology and AI on judicial proceedings, schools, and medicine, with a reflection on topics such as the organization of justice, responsibilities, and the organizational shortcomings of public bodies.
BibTex
BibTex not found
49.
[GMP2019]Guidotti Riccardo, Monreale Anna, Pedreschi Dino (2019) - ERCIM News, 116, 12-13. In ERCIM News, 116, 12-13
Abstract
Abstract not available
BibTex
BibTex not found
Research Line 1▪2▪3
50.
[PGG2018]Pedreschi Dino, Giannotti Fosca, Guidotti Riccardo, Monreale Anna, Pappalardo Luca, Ruggieri Salvatore, Turini Franco (2018) - arXiv preprint
Abstract
Black box systems for automated decision making, often based on machine learning over (big) data, map a user's features into a class or a score without exposing the reasons why. This is problematic not only for the lack of transparency, but also for possible biases hidden in the algorithms, due to human prejudices and collection artifacts hidden in the training data, which may lead to unfair or wrong decisions. We introduce the local-to-global framework for black box explanation, a novel approach with promising early results, which paves the way for a wide spectrum of future developments along three dimensions: (i) the language for expressing explanations in terms of highly expressive logic-based rules, with a statistical and causal interpretation; (ii) the inference of local explanations aimed at revealing the logic of the decision adopted for a specific instance by querying and auditing the black box in the vicinity of the target instance; (iii) the bottom-up generalization of the many local explanations into simple global ones, with algorithms that optimize the quality and comprehensibility of explanations.
BibTex
BibTex not found
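Dimension (iii), the bottom-up generalization, can be caricatured with the toy sketch below: collect many local rules (each a set of feature conditions) and keep the conditions that recur most often as a crude global summary. This is only a toy aggregation of my own, not the framework's generalization algorithm.

from collections import Counter

def globalize(local_rules, top_k=5):
    # local_rules: list of lists of condition strings, e.g. ["age > 40", "income <= 30k"]
    counts = Counter(cond for rule in local_rules for cond in rule)
    return counts.most_common(top_k)

# toy example of aggregating local explanations into a global summary
rules = [["age > 40", "income <= 30k"],
         ["age > 40", "owns_house == no"],
         ["income <= 30k"]]
print(globalize(rules))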
Thesis
-
Mattia Setzu - Opening the Black Box: Empowering Machine Learning Models with Explanations. [PhD Thesis - 2021 - Completed]
-
Panigutti Cecilia - eXplainable AI for trustworthy healthcare applications. [PhD Thesis - 2017 - Completed]
-
Bodria Francesco - Understanding and Exploiting the Latent Space of Machine Learning Models. [PhD Thesis - 2019 - On Going]
-
Francesca Naretto - The relationship between privacy and explanations. [PhD Thesis - 2019 - On Going]
-
Francesco Spinnato - Explanation Methods for Sequential Data Models. [PhD Thesis - 2021 - On Going]
-
Isacco Beretta - Causal Explainable Artificial Intelligence. [PhD Thesis - 2021 - On Going]
-
Giovanni Camarda - Machine Learning Explanation as Human-machine collaboration. [PhD Thesis - 2021 - On Going]
-
Robin Thierrault - Mechanistic Explanation of NN-based Prediction. [PhD Thesis - 2021 - On Going]
-
Eleonora Cappuccio - A framework for Explanation User Interfaces. [PhD Thesis - 2021 - On Going]
-
Francesco Spinnato - A Model Agnostic Local Explainer for Time Series Black-Box Classifiers. [Master Thesis - 2020 - Completed]
-
Michele Resta - Increasing the Interpretability of Deep Recurrent Models for Biomedical Signals Analysis. [Master Thesis - 2020 - Completed]
-
Francesco Sabiu - Privacy risk analysis of LIME explanations. [Master Thesis - 2020 - Completed]
-
Andrea Fedele - Explaining Siamese Networks in Few-Shot Learning for Audio Data. [Master Thesis - 2022 - Completed]
-
Luca Corbucci - Semantic enrichment of XAI explanations for healthcare. [Master Thesis - 2021 - Completed]
-
Reza Puarrim - X-Bot: Development of a Model and Data Agnostic Chatbot for Explaining the Decisions of Black Box Classifiers. [Master Thesis - 2021 - Completed]
-
Alessandra Galassi - Explanation of Cardiovascular Risk Model Estimator. [Master Thesis - 2020 - Completed]
-
Andrea dell'Abate - Design and Application of ProtoPNet for Audio Data. [Master Thesis - 2022 - On Going]
-
Valeria Messina - An explainability framework for fiscal fraud detection classifier. [Master Thesis - 2022 - On Going]
-
Matteo D'Onofrio - Matrix Profile-based Interpretable Time Series Classifier. [Bachelor Thesis - 2021 - Completed]