Selected publications

2025

  1. A Practical Approach to Causal Inference over Time
    Martina Cinquini, Isacco Beretta, Salvatore Ruggieri, and Isabel Valera
    Proceedings of the AAAI Conference on Artificial Intelligence, Apr 2025
  2. Embracing Diversity: A Multi-Perspective Approach with Soft Labels
    Benedetta Muscato, Praveen Bushipaka, Gizem Gezici, Lucia Passaro, Fosca Giannotti, and 1 more author
    Sep 2025
  3. SafeGen: safeguarding privacy and fairness through a genetic method
    Martina Cinquini, Marta Marchiori Manerba, Federico Mazzoni, Francesca Pratesi, and Riccardo Guidotti
    Machine Learning, Sep 2025
  4. A Bias Injection Technique to Assess the Resilience of Causal Discovery Methods
    Martina Cinquini, Karima Makhlouf, Sami Zhioua, Catuscia Palamidessi, and Riccardo Guidotti
    IEEE Access, Sep 2025
  5. Differentially Private FastSHAP for Federated Learning Model Explainability
    Valerio Bonsignori, Luca Corbucci, Francesca Naretto, and Anna Monreale
    In 2025 International Joint Conference on Neural Networks (IJCNN), Jun 2025
  6. Counterfactual Situation Testing: From Single to Multidimensional Discrimination
    Jose M. Alvarez, and Salvatore Ruggieri
    Journal of Artificial Intelligence Research, Apr 2025
  7. Balancing Fairness and Interpretability in Clustering with FairParTree
    Cristiano Landi, Alessio Cascione, Marta Marchiori Manerba, and Riccardo Guidotti
    Oct 2025
  8. Human-AI coevolution
    Dino Pedreschi, Luca Pappalardo, Emanuele Ferragina, Ricardo Baeza-Yates, Albert-László Barabási, and 12 more authors
    Artificial Intelligence, Feb 2025
  9. MASCOTS: Model-Agnostic Symbolic COunterfactual Explanations for Time Series
    Dawid Płudowski, Francesco Spinnato, Piotr Wilczyński, Krzysztof Kotowski, Evridiki Vasileia Ntagiou, and 2 more authors
    Sep 2025
  10. Mathematical Foundation of Interpretable Equivariant Surrogate Models
    Jacopo Joy Colombini, Filippo Bonchi, Francesco Giannini, Fosca Giannotti, Roberto Pellungrini, and 1 more author
    Oct 2025
  11. Explainable AI in Time-Sensitive Scenarios: Prefetched Offline Explanation Model
    Fabio Michele Russo, Carlo Metta, Anna Monreale, Salvatore Rinzivillo, and Fabio Pinelli
    Oct 2025
  12. Towards Building a Trustworthy RAG-Based Chatbot for the Italian Public Administration
    Chandana Sree Mala, Christian Maio, Mattia Proietti, Gizem Gezici, Fosca Giannotti, and 3 more authors
    Sep 2025
  13. Perspectives in Play: A Multi-Perspective Approach for More Inclusive NLP Systems
    Benedetta Muscato, Lucia Passaro, Gizem Gezici, and Fosca Giannotti
    In Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence, Sep 2025
  14. Categorical Explaining Functors: Ensuring Coherence in Logical Explanations
    Stefano Fioravanti, Francesco Giannini, Pietro Barbiero, Paolo Frazzetto, Roberto Confalonieri, and 2 more authors
    In Proceedings of the Twenty-Second International Conference on Principles of Knowledge Representation and Reasoning, Nov 2025
  15. An Interpretable Data-Driven Unsupervised Approach for the Prevention of Forgotten Items
    Luca Corbucci, Javier Alejandro Borges Legrottaglie, Francesco Spinnato, Anna Monreale, and Riccardo Guidotti
    Oct 2025
  16. Group Explainability Through Local Approximation
    Mattia Setzu, Riccardo Guidotti, Dino Pedreschi, and Fosca Giannotti
    Oct 2025
  17. Ensemble Counterfactual Explanations for Churn Analysis
    Samuele Tonati, Marzio Di Vece, Roberto Pellungrini, and Fosca Giannotti
    Oct 2025
  18. The explanation dialogues: an expert focus study to understand requirements towards explanations within the GDPR
    Laura State, Alejandra Bringas Colmenarejo, Andrea Beretta, Salvatore Ruggieri, Franco Turini, and 1 more author
    Artificial Intelligence and Law, Jan 2025
  19. Interpretable Instance-Based Learning Through Pairwise Distance Trees
    Andrea Fedele, Alessio Cascione, Riccardo Guidotti, and Cristiano Landi
    Sep 2025
  20. Unsupervised and Interpretable Detection of User Personalities in Online Social Networks
    Alessio Cascione, Laura Pollacci, and Riccardo Guidotti
    Oct 2025
  21. Deferring Concept Bottleneck Models: Learning to Defer Interventions to Inaccurate Experts
    Andrea Pugnana, Riccardo Massidda, Francesco Giannini, Pietro Barbiero, Mateo Espinosa Zarlenga, and 4 more authors
    Dec 2025
  22. DeepProofLog: Efficient Proving in Deep Stochastic Logic Programs
    Ying Jiao, Rodrigo Castellano Ontiveros, Luc De Raedt, Marco Gori, Francesco Giannini, and 2 more authors
    Dec 2025
  23. Disentangled and Self-Explainable Node Representation Learning
    Simone Piaggesi, André Panisson, and Megha Khosla
    Dec 2025
  24. FeDa4Fair: Client-Level Federated Datasets for Fairness Evaluation
    Xenia Heilmann, Luca Corbucci, Mattia Cerrato, and Anna Monreale
    Dec 2025
  25. A Simulation Framework for Studying Systemic Effects of Feedback Loops in Recommender Systems
    G. Barlacchi, M. Lalli, E. Ferragina, F. Giannotti, and L. Pappalardo
    Dec 2025
  26. A Note on Methods for Explainable Malware Analysis
    Cristiano Landi, Alessio Cascione, Marta Marchiori Manerba, and Riccardo Guidotti
    Dec 2025
  27. MAINLE: a Multi-Agent, Interactive, Natural Language Local Explainer of Classification Tasks
    Paulo Bruno Serafim, Romula Ferrer Filho, Stenio Freitas, Gizem Gezici, Fosca Giannotti, and 2 more authors
    Dec 2025
  28. Explanations Go Linear: Post-hoc Explainability for Tabular Data with Interpretable Meta-Encoding
    Simone Piaggesi, Riccardo Guidotti, Fosca Giannotti, and Dino Pedreschi
    Dec 2025
  29. Evaluating the Privacy Exposure of Interpretable Global and Local Explainers
    Francesca Naretto, Anna Monreale, and Fosca Giannotti
    Dec 2025
  30. "Learning by surprise": a new characterization and mitigation strategy of model collapse in LLM autophagy
    Daniele Gambetta, Gizem Gezici, Fosca Giannotti, Dino Pedreschi, Alistair Knott, and 1 more author
    Dec 2025

2024

  1. Enhancing Echo State Networks with Gradient-based Explainability Methods
    Francesco Spinnato, Andrea Cossu, Riccardo Guidotti, Andrea Ceni, Claudio Gallicchio, and 1 more author
    In ESANN 2024 proceedings, Dec 2024
  2. The ALTAI checklist as a tool to assess ethical and legal implications for a trustworthy AI development in education
    Andrea Fedele, Clara Punzi, and Stefano Tramacere
    Computer Law & Security Review, Jul 2024
  3. Fast, Interpretable, and Deterministic Time Series Classification With a Bag-of-Receptive-Fields
    Francesco Spinnato, Riccardo Guidotti, Anna Monreale, and Mirco Nanni
    IEEE Access, Jul 2024
  4. One-Shot Clustering for Federated Learning
    Maciej Krzysztof Zuziak, Roberto Pellungrini, and Salvatore Rinzivillo
    In 2024 IEEE International Conference on Big Data (BigData), Dec 2024
  5. Shape-based Methods in Mobility Data Analysis: Effectiveness and Limitations
    Cristiano Landi, and Riccardo Guidotti
    Nov 2024
  6. DINE: Dimensional Interpretability of Node Embeddings
    Simone Piaggesi, Megha Khosla, André Panisson, and Avishek Anand
    IEEE Transactions on Knowledge and Data Engineering, Dec 2024
  7. FairBelief - Assessing Harmful Beliefs in Language Models
    Mattia Setzu, Marta Marchiori Manerba, Pasquale Minervini, and Debora Nozza
    In Proceedings of the 4th Workshop on Trustworthy Natural Language Processing (TrustNLP 2024), Dec 2024
  8. A Survey on Graph Counterfactual Explanations: Definitions, Methods, Evaluation, and Research Challenges
    Mario Alfonso Prado-Romero, Bardh Prenkaj, Giovanni Stilo, and Fosca Giannotti
    ACM Computing Surveys, Apr 2024
  9. FLocalX - Local to Global Fuzzy Explanations for Black Box Classifiers
    Guillermo Fernandez, Riccardo Guidotti, Fosca Giannotti, Mattia Setzu, Juan A. Aledo, and 2 more authors
    Apr 2024
  10. Inference through innovation processes tested in the authorship attribution task
    Giulio Tani Raffaelli, Margherita Lalli, and Francesca Tria
    Communications Physics, Sep 2024
  11. An Interactive Interface for Feature Space Navigation
    Eleonora Cappuccio, Isacco Beretta, Marta Marchiori Manerba, and Salvatore Rinzivillo
    Jun 2024
  12. Explainable Authorship Identification in Cultural Heritage Applications
    Mattia Setzu, Silvia Corbara, Anna Monreale, Alejandro Moreo, and Fabrizio Sebastiani
    Journal on Computing and Cultural Heritage, Jun 2024
  13. Explainable Artificial Intelligence (XAI) 2.0: A manifesto of open challenges and interdisciplinary research directions
    Luca Longo, Mario Brcic, Federico Cabitza, Jaesik Choi, Roberto Confalonieri, and 14 more authors
    Information Fusion, Jun 2024
  14. Towards Transparent Healthcare: Advancing Local Explanation Methods in Explainable Artificial Intelligence
    Carlo Metta, Andrea Beretta, Roberto Pellungrini, Salvatore Rinzivillo, and Fosca Giannotti
    Bioengineering, Apr 2024
  15. Data-Agnostic Pivotal Instances Selection for Decision-Making Models
    Alessio Cascione, Mattia Setzu, and Riccardo Guidotti
    Apr 2024
  16. Drifting explanations in continual learning
    Andrea Cossu, Francesco Spinnato, Riccardo Guidotti, and Davide Bacciu
    Neurocomputing, Sep 2024
  17. Advancing Dermatological Diagnostics: Interpretable AI for Enhanced Skin Lesion Classification
    Carlo Metta, Andrea Beretta, Riccardo Guidotti, Yuan Yin, Patrick Gallinari, and 2 more authors
    Diagnostics, Apr 2024
  18. Social Bias Probing: Fairness Benchmarking for Language Models
    Marta Marchiori Manerba, Karolina Stanczak, Riccardo Guidotti, and Isabelle Augenstein
    In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, Apr 2024
  19. Mapping the landscape of ethical considerations in explainable AI research
    Luca Nannini, Marta Marchiori Manerba, and Isacco Beretta
    Ethics and Information Technology, Jun 2024
  20. Causality-Aware Local Interpretable Model-Agnostic Explanations
    Martina Cinquini, and Riccardo Guidotti
    Jun 2024
  21. Commodity-specific triads in the Dutch inter-industry production network
    Marzio Di Vece, Frank P. Pijpers, and Diego Garlaschelli
    Scientific Reports, Feb 2024
  22. A Frank System for Co-Evolutionary Hybrid Decision-Making
    Federico Mazzoni, Riccardo Guidotti, and Alessio Malizia
    Feb 2024
  23. Exploring Large Language Models Capabilities to Explain Decision Trees
    Paulo Bruno Serafim, Pierluigi Crescenzi, Gizem Gezici, Eleonora Cappuccio, Salvatore Rinzivillo, and 1 more author
    Jun 2024
  24. Explaining Siamese networks in few-shot learning
    Andrea Fedele, Riccardo Guidotti, and Dino Pedreschi
    Machine Learning, Apr 2024
  25. Generative Model for Decision Trees
    Riccardo Guidotti, Anna Monreale, Mattia Setzu, and Giulia Volpi
    Proceedings of the AAAI Conference on Artificial Intelligence, Mar 2024
  26. GLOR-FLEX: Local to Global Rule-Based EXplanations for Federated Learning
    Rami Haffar, Francesca Naretto, David Sánchez, Anna Monreale, and Josep Domingo-Ferrer
    In 2024 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), Jun 2024
  27. Counterfactual and Prototypical Explanations for Tabular Data via Interpretable Latent Space
    Simone Piaggesi, Francesco Bodria, Riccardo Guidotti, Fosca Giannotti, and Dino Pedreschi
    IEEE Access, Jun 2024
  28. Analysis of exposome and genetic variability suggests stress as a major contributor for development of pancreatic ductal adenocarcinoma
    Giulia Peduzzi, Alessio Felici, Roberto Pellungrini, Francesca Giorgolo, Riccardo Farinella, and 7 more authors
    Digestive and Liver Disease, Jun 2024
  29. Multi-Perspective Stance Detection
    Benedetta Muscato, Praveen Bushipaka, Gizem Gezici, Lucia Passaro, and Fosca Giannotti
    Dec 2024
  30. Beyond Headlines: A Corpus of Femicides News Coverage in Italian Newspapers
    Eleonora Cappuccio, Benedetta Muscato, Laura Pollacci, Marta Marchiori Manerba, Clara Punzi, and 5 more authors
    Dec 2024
  31. A survey on the impact of AI-based recommenders on human behaviours: methodologies, outcomes and future directions
    Luca Pappalardo, Emanuele Ferragina, Salvatore Citraro, Giuliano Cornacchia, Mirco Nanni, and 9 more authors
    Dec 2024
  32. Requirements of eXplainable AI in Algorithmic Hiring
    A. Beretta, G. Ercoli, A. Ferraro, R. Guidotti, A. Iommi, and 4 more authors
    Dec 2024
  33. The ethical impact assessment of selling life insurance to Titanic passengers
    Gizem Gezici, Chiara Mannari, and Lorenzo Orlandi
    Dec 2024
  34. XAI in healthcare
    G. Gezici, C. Metta, A. Beretta, R. Pellungrini, S. Rinzivillo, D. Pedreschi, and F. Giannotti
    Dec 2024
  35. An Overview of Recent Approaches to Enable Diversity in Large Language Models through Aligning with Human Perspectives
    Benedetta Muscato, Chandana Sree Mala, Marta Marchiori Manerba, Gizem Gezici, and Fosca Giannotti
    Dec 2024
  36. Interpretable and Fair Mechanisms for Abstaining Classifiers
    Daphne Lenders, Andrea Pugnana, Roberto Pellungrini, Toon Calders, Dino Pedreschi, and 1 more author
    Dec 2024

2023

  1. Topics in Selective Classification
    Andrea Pugnana
    Proceedings of the AAAI Conference on Artificial Intelligence, Jun 2023
  2. Benchmarking and survey of explanation methods for black box models
    Francesco Bodria, Fosca Giannotti, Riccardo Guidotti, Francesca Naretto, Dino Pedreschi, and 1 more author
    Data Mining and Knowledge Discovery, Jun 2023
  3. Effects of Route Randomization on Urban Emissions
    Giuliano Cornacchia, Mirco Nanni, Dino Pedreschi, and Luca Pappalardo
    SUMO Conference Proceedings, Jun 2023
  4. Improving trust and confidence in medical skin lesion diagnosis through explainable deep learning
    Carlo Metta, Andrea Beretta, Riccardo Guidotti, Yuan Yin, Patrick Gallinari, and 2 more authors
    International Journal of Data Science and Analytics, Jun 2023
  5. Explaining Socio-Demographic and Behavioral Patterns of Vaccination Against the Swine Flu (H1N1) Pandemic
    Clara Punzi, Aleksandra Maslennikova, Gizem Gezici, Roberto Pellungrini, and Fosca Giannotti
    Jun 2023
  6. Interpretable Data Partitioning Through Tree-Based Clustering Methods
    Riccardo Guidotti, Cristiano Landi, Andrea Beretta, Daniele Fadda, and Mirco Nanni
    Jun 2023
  7. Text to Time Series Representations: Towards Interpretable Predictive Models
    Mattia Poggioli, Francesco Spinnato, and Riccardo Guidotti
    Jun 2023
  8. Reason to Explain: Interactive Contrastive Explanations (REASONX)
    Laura State, Salvatore Ruggieri, and Franco Turini
    Jun 2023
  9. Understanding Any Time Series Classifier with a Subsequence-based Explainer
    Francesco Spinnato, Riccardo Guidotti, Anna Monreale, Mirco Nanni, Dino Pedreschi, and 1 more author
    ACM Transactions on Knowledge Discovery from Data, Nov 2023
  10. Declarative Reasoning on Explanations Using Constraint Logic Programming
    Laura State, Salvatore Ruggieri, and Franco Turini
    Nov 2023
  11. EXPHLOT: EXplainable Privacy Assessment for Human LOcation Trajectories
    Francesca Naretto, Roberto Pellungrini, Salvatore Rinzivillo, and Daniele Fadda
    Nov 2023
  12. The Importance of Time in Causal Algorithmic Recourse
    Isacco Beretta, and Martina Cinquini
    Nov 2023
  13. Exposing Racial Dialect Bias in Abusive Language Detection: Can Explainability Play a Role?
    Marta Marchiori Manerba, and Virginia Morini
    Nov 2023
  14. Demo: an Interactive Visualization Combining Rule-Based and Feature Importance Explanations
    Eleonora Cappuccio, Daniele Fadda, Rosa Lanzilotti, and Salvatore Rinzivillo
    In Proceedings of the 15th Biannual Conference of the Italian SIGCHI Chapter, Sep 2023
  15. Deterministic, quenched, and annealed parameter estimation for heterogeneous network models
    Marzio Di Vece, Diego Garlaschelli, and Tiziano Squartini
    Physical Review E, Nov 2023
  16. Co-design of Human-centered, Explainable AI for Clinical Decision Support
    Cecilia Panigutti, Andrea Beretta, Daniele Fadda, Fosca Giannotti, Dino Pedreschi, and 2 more authors
    ACM Transactions on Interactive Intelligent Systems, Dec 2023
  17. Handling Missing Values in Local Post-hoc Explainability
    Martina Cinquini, Fosca Giannotti, Riccardo Guidotti, and Andrea Mattei
    Dec 2023
  18. Geolet: An Interpretable Model for Trajectory Classification
    Cristiano Landi, Francesco Spinnato, Riccardo Guidotti, Anna Monreale, and Mirco Nanni
    Dec 2023
  19. AUC-based Selective Classification
    Andrea Pugnana, and Salvatore Ruggieri
    Dec 2023
  20. Position Paper: On the Role of Abductive Reasoning in Semantic Image Segmentation
    Andrea Rafanelli, Stefania Costantini, and Andrea Omicini
    Dec 2023
  21. Explain and interpret few-shot learning
    Andrea Fedele
    Dec 2023
  22. Modeling Events and Interactions through Temporal Processes – A Survey
    Angelica Liguori, Luciano Caroprese, Marco Minici, Bruno Veloso, Francesco Spinnato, and 3 more authors
    Dec 2023

2022

  1. Privacy Risk of Global Explainers
    Francesca Naretto, Anna Monreale, and Fosca Giannotti
    Sep 2022
  2. Understanding the impact of explanations on advice-taking: a user study for AI-based clinical Decision Support Systems
    Cecilia Panigutti, Andrea Beretta, Fosca Giannotti, and Dino Pedreschi
    In CHI Conference on Human Factors in Computing Systems, Apr 2022
  3. Assessing Trustworthy AI in Times of COVID-19: Deep Learning for Predicting a Multiregional Score Conveying the Degree of Lung Compromise in COVID-19 Patients
    Himanshi Allahabadi, Julia Amann, Isabelle Balot, Andrea Beretta, Charles Binkley, and 52 more authors
    IEEE Transactions on Technology and Society, Dec 2022
  4. Counterfactual explanations and how to find them: literature review and benchmarking
    Riccardo Guidotti
    Data Mining and Knowledge Discovery, Apr 2022
  5. Interpretable Latent Space to Enable Counterfactual Explanations
    Francesco Bodria, Riccardo Guidotti, Fosca Giannotti, and Dino Pedreschi
    Apr 2022
  6. Explaining Siamese Networks in Few-Shot Learning for Audio Data
    Andrea Fedele, Riccardo Guidotti, and Dino Pedreschi
    Apr 2022
  7. Evaluating the Privacy Exposure of Interpretable Global Explainers
    Francesca Naretto, Anna Monreale, and Fosca Giannotti
    In 2022 IEEE 4th International Conference on Cognitive Machine Intelligence (CogMI), Dec 2022
  8. Explaining Crash Predictions on Multivariate Time Series Data
    Francesco Spinnato, Riccardo Guidotti, Mirco Nanni, Daniele Maccagnola, Giulia Paciello, and 1 more author
    Dec 2022
  9. Methods and tools for causal discovery and causal inference
    Ana Rita Nogueira, Andrea Pugnana, Salvatore Ruggieri, Dino Pedreschi, and João Gama
    WIREs Data Mining and Knowledge Discovery, Jan 2022
  10. Investigating Debiasing Effects on Classification and Explainability
    Marta Marchiori Manerba, and Riccardo Guidotti
    In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society, Jul 2022
  11. Stable and actionable explanations of black-box models through factual and counterfactual rules
    Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Francesca Naretto, Franco Turini, and 2 more authors
    Data Mining and Knowledge Discovery, Nov 2022
  12. Explainable AI for Time Series Classification: A Review, Taxonomy and Research Directions
    Andreas Theissler, Francesco Spinnato, Udo Schlegel, and Riccardo Guidotti
    IEEE Access, Nov 2022
  13. Transparent Latent Space Counterfactual Explanations for Tabular Data
    Francesco Bodria, Riccardo Guidotti, Fosca Giannotti, and Dino Pedreschi
    In 2022 IEEE 9th International Conference on Data Science and Advanced Analytics (DSAA), Oct 2022
  14. Understanding peace through the world news
    Vasiliki Voukelatou, Ioanna Miliou, Fosca Giannotti, and Luca Pappalardo
    EPJ Data Science, Jan 2022
  15. Exemplars and Counterexemplars Explanations for Skin Lesion Classifiers
    Carlo Metta, Riccardo Guidotti, Yuan Yin, Patrick Gallinari, and Salvatore Rinzivillo
    Sep 2022
  16. Explaining Black Box with Visual Exploration of Latent Space
    Francesco Bodria, Salvatore Rinzivillo, Daniele Fadda, Riccardo Guidotti, Fosca Giannotti, and 2 more authors
    Dec 2022
  17. User-driven counterfactual generator: a human centered exploration
    I. Beretta, E. Cappuccio, and M. Marchiori Manerba
    Dec 2022

2021

  1. TRIPLEx: Triple Extraction for Explanation
    Mattia Setzu, Anna Monreale, and Pasquale Minervini
    In 2021 IEEE Third International Conference on Cognitive Machine Intelligence (CogMI), Dec 2021
  2. Trustworthy AI
    Raja Chatila, Virginia Dignum, Michael Fisher, Fosca Giannotti, Katharina Morik, and 2 more authors
    Dec 2021
  3. Intelligenza artificiale in ambito diabetologico: prospettive, dalla ricerca di base alle applicazioni cliniche
    Emanuele Bosi, and Cecilia Panigutti
    il Diabete, Dec 2021
  4. Designing Shapelets for Interpretable Data-Agnostic Classification
    Riccardo Guidotti, and Anna Monreale
    In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, Jul 2021
  5. GLocalX - From Local to Global Explanations of Black Box AI Models
    Mattia Setzu, Riccardo Guidotti, Anna Monreale, Franco Turini, Dino Pedreschi, and 1 more author
    Artificial Intelligence, May 2021
  6. Matrix Profile-Based Interpretable Time Series Classifier
    Riccardo Guidotti, and Matteo D’Onofrio
    Frontiers in Artificial Intelligence, Oct 2021
  7. Exemplars and Counterexemplars Explanations for Image Classifiers, Targeting Skin Lesion Labeling
    Carlo Metta, Riccardo Guidotti, Yuan Yin, Patrick Gallinari, and Salvatore Rinzivillo
    In 2021 IEEE Symposium on Computers and Communications (ISCC), Sep 2021
  8. Deriving a Single Interpretable Model by Merging Tree-Based Classifiers
    Valerio Bonsignori, Riccardo Guidotti, and Anna Monreale
    Sep 2021
  9. FairLens: Auditing black-box clinical decision support systems
    Cecilia Panigutti, Alan Perotti, André Panisson, Paolo Bajardi, and Dino Pedreschi
    Information Processing & Management, Sep 2021
  10. FairShades: Fairness Auditing via Explainability in Abusive Language Detection Systems
    Marta Marchiori Manerba, and Riccardo Guidotti
    In 2021 IEEE Third International Conference on Cognitive Machine Intelligence (CogMI), Dec 2021
  11. Boosting Synthetic Data Generation with Effective Nonlinear Causal Discovery
    Martina Cinquini, Fosca Giannotti, and Riccardo Guidotti
    In 2021 IEEE Third International Conference on Cognitive Machine Intelligence (CogMI), Dec 2021
  12. Occlusion-Based Explanations in Deep Recurrent Models for Biomedical Signals
    Michele Resta, Anna Monreale, and Davide Bacciu
    Entropy, Aug 2021
  13. Explainable AI Within the Digital Transformation and Cyber Physical Systems: XAI Methods and Applications
    Aug 2021
  14. Evaluating local explanation methods on ground truth
    Riccardo Guidotti
    Artificial Intelligence, Feb 2021
  15. Ensemble of Counterfactual Explainers
    Riccardo Guidotti, and Salvatore Ruggieri
    Dec 2021

2020

  1. Global Explanations with Local Scoring
    Mattia Setzu, Riccardo Guidotti, Anna Monreale, and Franco Turini
    Dec 2020
  2. Black Box Explanation by Learning Image Exemplars in the Latent Feature Space
    Riccardo Guidotti, Anna Monreale, Stan Matwin, and Dino Pedreschi
    Dec 2020
  3. Explaining Sentiment Classification with Synthetic Exemplars and Counter-Exemplars
    Orestis Lampridis, Riccardo Guidotti, and Salvatore Ruggieri
    Dec 2020
  4. Prediction and Explanation of Privacy Risk on Mobility Data with Neural Networks
    Francesca Naretto, Roberto Pellungrini, Franco Maria Nardini, and Fosca Giannotti
    Dec 2020
  5. Explaining Image Classifiers Generating Exemplars and Counter-Exemplars from Latent Representations
    Riccardo Guidotti, Anna Monreale, Stan Matwin, and Dino Pedreschi
    Proceedings of the AAAI Conference on Artificial Intelligence, Apr 2020
  6. Predicting and Explaining Privacy Risk Exposure in Mobility Data
    Francesca Naretto, Roberto Pellungrini, Anna Monreale, Franco Maria Nardini, and Mirco Musolesi
    Apr 2020
  7. Doctor XAI: an ontology-based approach to black-box sequential data classification explanations
    Cecilia Panigutti, Alan Perotti, and Dino Pedreschi
    In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, Jan 2020
  8. Data-Agnostic Local Neighborhood Generation
    Riccardo Guidotti, and Anna Monreale
    In 2020 IEEE International Conference on Data Mining (ICDM), Nov 2020
  9. Explaining Any Time Series Classifier
    Riccardo Guidotti, Anna Monreale, Francesco Spinnato, Dino Pedreschi, and Fosca Giannotti
    In 2020 IEEE Second International Conference on Cognitive Machine Intelligence (CogMI), Oct 2020
  10. Rischi etico-legali dell’Intelligenza Artificiale
    Anna Monreale
    Dec 2020
  11. Opening the black box: a primer for anti-discrimination
    Salvatore Ruggieri, Fosca Giannotti, Riccardo Guidotti, Anna Monreale, Dino Pedreschi, and 1 more author
    Dec 2020

2019

  1. Helping Your Docker Images to Spread Based on Explainable Models
    Riccardo Guidotti, Jacopo Soldani, Davide Neri, Antonio Brogi, and Dino Pedreschi
    Dec 2019
  2. Meaningful Explanations of Black Box AI Decision Systems
    Dino Pedreschi, Fosca Giannotti, Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, and 1 more author
    Proceedings of the AAAI Conference on Artificial Intelligence, Jul 2019
  3. Factual and Counterfactual Explanations for Black Box Decision Making
    Riccardo Guidotti, Anna Monreale, Fosca Giannotti, Dino Pedreschi, Salvatore Ruggieri, and 1 more author
    IEEE Intelligent Systems, Nov 2019
  4. Explaining Multi-label Black-Box Classifiers for Health Applications
    Cecilia Panigutti, Riccardo Guidotti, Anna Monreale, and Dino Pedreschi
    Aug 2019
  5. The AI black box explanation problem
    Riccardo Guidotti, Anna Monreale, and Dino Pedreschi
    Dec 2019

2018

  1. A Survey of Methods for Explaining Black Box Models
    Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Fosca Giannotti, and 1 more author
    ACM Computing Surveys, Aug 2018
  2. Open the Black Box Data-Driven Explanation of Black Box Decision Systems
    Dino Pedreschi, Fosca Giannotti, Riccardo Guidotti, Anna Monreale, Luca Pappalardo, and 2 more authors
    Dec 2018
  3. Local Rule-Based Explanations of Black Box Decision Systems
    Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Dino Pedreschi, Franco Turini, and 1 more author
    Dec 2018