This section presents the most significant scientific publications produced within the ERC XAI project. The works listed here were selected for their strategic importance and foundational impact on the project’s research lines, and represent the main contributions to the development of Explainable Artificial Intelligence (XAI) methodologies and algorithms. The selection covers key progress across our research areas (RA1-RA5), with a particular focus on impact in critical sectors such as healthcare, finance, and mobility, and outlines our vision for a more transparent and trustworthy AI.

2025

  1. A Practical Approach to Causal Inference over Time
    Martina Cinquini, Isacco Beretta, Salvatore Ruggieri, and Isabel Valera
    Proceedings of the AAAI Conference on Artificial Intelligence, Apr 2025
  2. Embracing Diversity: A Multi-Perspective Approach with Soft Labels
    Benedetta Muscato, Praveen Bushipaka, Gizem Gezici, Lucia Passaro, Fosca Giannotti, and 1 more author
    Sep 2025
  3. SafeGen: safeguarding privacy and fairness through a genetic method
    Martina Cinquini, Marta Marchiori Manerba, Federico Mazzoni, Francesca Pratesi, and Riccardo Guidotti
    Machine Learning, Sep 2025
  4. Human-AI coevolution
    Dino Pedreschi, Luca Pappalardo, Emanuele Ferragina, Ricardo Baeza-Yates, Albert-László Barabási, and 12 more authors
    Artificial Intelligence, Feb 2025
  5. MASCOTS: Model-Agnostic Symbolic COunterfactual Explanations for Time Series
    Dawid Płudowski, Francesco Spinnato, Piotr Wilczyński, Krzysztof Kotowski, Evridiki Vasileia Ntagiou, and 2 more authors
    Sep 2025
  6. Mathematical Foundation of Interpretable Equivariant Surrogate Models
    Jacopo Joy Colombini, Filippo Bonchi, Francesco Giannini, Fosca Giannotti, Roberto Pellungrini, and 1 more author
    Oct 2025
  7. Balancing Fairness and Interpretability in Clustering with FairParTree
    Cristiano Landi, Alessio Cascione, Marta Marchiori Manerba, and Riccardo Guidotti
    Oct 2025
  8. Towards Building a Trustworthy RAG-Based Chatbot for the Italian Public Administration
    Chandana Sree Mala, Christian Maio, Mattia Proietti, Gizem Gezici, Fosca Giannotti, and 3 more authors
    Sep 2025
  9. Perspectives in Play: A Multi-Perspective Approach for More Inclusive NLP Systems
    Benedetta Muscato, Lucia Passaro, Gizem Gezici, and Fosca Giannotti
    In Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence, Sep 2025
  10. An Interpretable Data-Driven Unsupervised Approach for the Prevention of Forgotten Items
    Luca Corbucci, Javier Alejandro Borges Legrottaglie, Francesco Spinnato, Anna Monreale, and Riccardo Guidotti
    Oct 2025
  11. Group Explainability Through Local Approximation
    Mattia Setzu, Riccardo Guidotti, Dino Pedreschi, and Fosca Giannotti
    Oct 2025
  12. Interpretable Instance-Based Learning Through Pairwise Distance Trees
    Andrea Fedele, Alessio Cascione, Riccardo Guidotti, and Cristiano Landi
    Sep 2025
  13. Unsupervised and Interpretable Detection of User Personalities in Online Social Networks
    Alessio Cascione, Laura Pollacci, and Riccardo Guidotti
    Oct 2025
  14. Explanations Go Linear: Post-hoc Explainability for Tabular Data with Interpretable Meta-Encoding
    Simone Piaggesi, Riccardo Guidotti, Fosca Giannotti, and Dino Pedreschi
    Dec 2025
  15. Evaluating the Privacy Exposure of Interpretable Global and Local Explainers
    Francesca Naretto, Anna Monreale, and Fosca Giannotti
    Dec 2025

2024

  1. The ALTAI checklist as a tool to assess ethical and legal implications for a trustworthy AI development in education
    Andrea Fedele, Clara Punzi, and Stefano Tramacere
    Computer Law & Security Review, Jul 2024
  2. Fast, Interpretable, and Deterministic Time Series Classification With a Bag-of-Receptive-Fields
    Francesco Spinnato, Riccardo Guidotti, Anna Monreale, and Mirco Nanni
    IEEE Access, Jul 2024
  3. FairBelief - Assessing Harmful Beliefs in Language Models
    Mattia Setzu, Marta Marchiori Manerba, Pasquale Minervini, and Debora Nozza
    In Proceedings of the 4th Workshop on Trustworthy Natural Language Processing (TrustNLP 2024), Jul 2024
  4. A Survey on Graph Counterfactual Explanations: Definitions, Methods, Evaluation, and Research Challenges
    Mario Alfonso Prado-Romero, Bardh Prenkaj, Giovanni Stilo, and Fosca Giannotti
    ACM Computing Surveys, Apr 2024
  5. FLocalX - Local to Global Fuzzy Explanations for Black Box Classifiers
    Guillermo Fernandez, Riccardo Guidotti, Fosca Giannotti, Mattia Setzu, Juan A. Aledo, and 2 more authors
    Apr 2024
  6. An Interactive Interface for Feature Space Navigation
    Eleonora Cappuccio, Isacco Beretta, Marta Marchiori Manerba, and Salvatore Rinzivillo
    Jun 2024
  7. Explainable Authorship Identification in Cultural Heritage Applications
    Mattia Setzu, Silvia Corbara, Anna Monreale, Alejandro Moreo, and Fabrizio Sebastiani
    Journal on Computing and Cultural Heritage, Jun 2024
  8. Towards Transparent Healthcare: Advancing Local Explanation Methods in Explainable Artificial Intelligence
    Carlo Metta, Andrea Beretta, Roberto Pellungrini, Salvatore Rinzivillo, and Fosca Giannotti
    Bioengineering, Apr 2024
  9. Drifting explanations in continual learning
    Andrea Cossu, Francesco Spinnato, Riccardo Guidotti, and Davide Bacciu
    Neurocomputing, Sep 2024
  10. Advancing Dermatological Diagnostics: Interpretable AI for Enhanced Skin Lesion Classification
    Carlo Metta, Andrea Beretta, Riccardo Guidotti, Yuan Yin, Patrick Gallinari, and 2 more authors
    Diagnostics, Apr 2024
  11. Mapping the landscape of ethical considerations in explainable AI research
    Luca Nannini, Marta Marchiori Manerba, and Isacco Beretta
    Ethics and Information Technology, Jun 2024
  12. Commodity-specific triads in the Dutch inter-industry production network
    Marzio Di Vece, Frank P. Pijpers, and Diego Garlaschelli
    Scientific Reports, Feb 2024
  13. Exploring Large Language Models Capabilities to Explain Decision Trees
    Paulo Bruno Serafim, Pierluigi Crescenzi, Gizem Gezici, Eleonora Cappuccio, Salvatore Rinzivillo, and 1 more author
    Jun 2024
  14. Explaining Siamese networks in few-shot learning
    Andrea Fedele, Riccardo Guidotti, and Dino Pedreschi
    Machine Learning, Apr 2024
  15. Counterfactual and Prototypical Explanations for Tabular Data via Interpretable Latent Space
    Simone Piaggesi, Francesco Bodria, Riccardo Guidotti, Fosca Giannotti, and Dino Pedreschi
    IEEE Access, Apr 2024
  16. Multi-Perspective Stance Detection
    Benedetta Muscato, Praveen Bushipaka, Gizem Gezici, Lucia Passaro, and Fosca Giannotti
    Dec 2024
  17. Beyond Headlines: A Corpus of Femicides News Coverage in Italian Newspapers
    Eleonora Cappuccio, Benedetta Muscato, Laura Pollacci, Marta Marchiori Manerba, Clara Punzi, and 5 more authors
    Dec 2024
  18. Requirements of eXplainable AI in Algorithmic Hiring
    A. Beretta, G. Ercoli, A. Ferraro, R. Guidotti, A. Iommi, and 4 more authors
    Dec 2024
  19. The ethical impact assessment of selling life insurance to Titanic passengers
    Gizem Gezici, Chiara Mannari, and Lorenzo Orlandi
    Dec 2024
  20. XAI in healthcare
    Gizem Gezici, Carlo Metta, Andrea Beretta, Roberto Pellungrini, Salvatore Rinzivillo, Dino Pedreschi, and Fosca Giannotti
    Dec 2024
  21. An Overview of Recent Approaches to Enable Diversity in Large Language Models through Aligning with Human Perspectives
    Benedetta Muscato, Chandana Sree Mala, Marta Marchiori Manerba, Gizem Gezici, and Fosca Giannotti
    Dec 2024

2023

  1. Topics in Selective Classification
    Andrea Pugnana
    Proceedings of the AAAI Conference on Artificial Intelligence, Jun 2023
  2. Benchmarking and survey of explanation methods for black box models
    Francesco Bodria, Fosca Giannotti, Riccardo Guidotti, Francesca Naretto, Dino Pedreschi, and 1 more author
    Data Mining and Knowledge Discovery, Jun 2023
  3. Effects of Route Randomization on Urban Emissions
    Giuliano Cornacchia, Mirco Nanni, Dino Pedreschi, and Luca Pappalardo
    SUMO Conference Proceedings, Jun 2023
  4. Improving trust and confidence in medical skin lesion diagnosis through explainable deep learning
    Carlo Metta, Andrea Beretta, Riccardo Guidotti, Yuan Yin, Patrick Gallinari, and 2 more authors
    International Journal of Data Science and Analytics, Jun 2023
  5. Explaining Socio-Demographic and Behavioral Patterns of Vaccination Against the Swine Flu (H1N1) Pandemic
    Clara Punzi, Aleksandra Maslennikova, Gizem Gezici, Roberto Pellungrini, and Fosca Giannotti
    Jun 2023
  6. Reason to Explain: Interactive Contrastive Explanations (REASONX)
    Laura State, Salvatore Ruggieri, and Franco Turini
    Jun 2023
  7. Understanding Any Time Series Classifier with a Subsequence-based Explainer
    Francesco Spinnato, Riccardo Guidotti, Anna Monreale, Mirco Nanni, Dino Pedreschi, and 1 more author
    ACM Transactions on Knowledge Discovery from Data, Nov 2023
  8. Declarative Reasoning on Explanations Using Constraint Logic Programming
    Laura State, Salvatore Ruggieri, and Franco Turini
    Nov 2023
  9. EXPHLOT: EXplainable Privacy Assessment for Human LOcation Trajectories
    Francesca Naretto, Roberto Pellungrini, Salvatore Rinzivillo, and Daniele Fadda
    Nov 2023
  10. Deterministic, quenched, and annealed parameter estimation for heterogeneous network models
    Marzio Di Vece, Diego Garlaschelli, and Tiziano Squartini
    Physical Review E, Nov 2023
  11. Co-design of Human-centered, Explainable AI for Clinical Decision Support
    Cecilia Panigutti, Andrea Beretta, Daniele Fadda, Fosca Giannotti, Dino Pedreschi, and 2 more authors
    ACM Transactions on Interactive Intelligent Systems, Dec 2023

2022

  1. Privacy Risk of Global Explainers
    Francesca Naretto, Anna Monreale, and Fosca Giannotti
    Sep 2022
  2. Understanding the impact of explanations on advice-taking: a user study for AI-based clinical Decision Support Systems
    Cecilia Panigutti, Andrea Beretta, Fosca Giannotti, and Dino Pedreschi
    In CHI Conference on Human Factors in Computing Systems, Apr 2022
  3. Counterfactual explanations and how to find them: literature review and benchmarking
    Riccardo Guidotti
    Data Mining and Knowledge Discovery, Apr 2022
  4. Interpretable Latent Space to Enable Counterfactual Explanations
    Francesco Bodria, Riccardo Guidotti, Fosca Giannotti, and Dino Pedreschi
    Apr 2022
  5. Stable and actionable explanations of black-box models through factual and counterfactual rules
    Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Francesca Naretto, Franco Turini, and 2 more authors
    Data Mining and Knowledge Discovery, Nov 2022
  6. Explainable AI for Time Series Classification: A Review, Taxonomy and Research Directions
    Andreas Theissler, Francesco Spinnato, Udo Schlegel, and Riccardo Guidotti
    IEEE Access, Nov 2022
  7. Transparent Latent Space Counterfactual Explanations for Tabular Data
    Francesco Bodria, Riccardo Guidotti, Fosca Giannotti, and Dino Pedreschi
    In 2022 IEEE 9th International Conference on Data Science and Advanced Analytics (DSAA), Oct 2022
  8. User-driven counterfactual generator: a human centered exploration
    Isacco Beretta, Eleonora Cappuccio, and Marta Marchiori Manerba
    Dec 2022

2021

  1. GLocalX - From Local to Global Explanations of Black Box AI Models
    Mattia Setzu, Riccardo Guidotti, Anna Monreale, Franco Turini, Dino Pedreschi, and 1 more author
    Artificial Intelligence, May 2021
  2. Matrix Profile-Based Interpretable Time Series Classifier
    Riccardo Guidotti and Matteo D’Onofrio
    Frontiers in Artificial Intelligence, Oct 2021
  3. Exemplars and Counterexemplars Explanations for Image Classifiers, Targeting Skin Lesion Labeling
    Carlo Metta, Riccardo Guidotti, Yuan Yin, Patrick Gallinari, and Salvatore Rinzivillo
    In 2021 IEEE Symposium on Computers and Communications (ISCC), Sep 2021
  4. FairLens: Auditing black-box clinical decision support systems
    Cecilia Panigutti, Alan Perotti, André Panisson, Paolo Bajardi, and Dino Pedreschi
    Information Processing & Management, Sep 2021

2020

  1. Predicting and Explaining Privacy Risk Exposure in Mobility Data
    Francesca Naretto, Roberto Pellungrini, Anna Monreale, Franco Maria Nardini, and Mirco Musolesi
    Sep 2020
  2. Doctor XAI: an ontology-based approach to black-box sequential data classification explanations
    Cecilia Panigutti, Alan Perotti, and Dino Pedreschi
    In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, Jan 2020
  3. Data-Agnostic Local Neighborhood Generation
    Riccardo Guidotti and Anna Monreale
    In 2020 IEEE International Conference on Data Mining (ICDM), Nov 2020

2019

  1. Factual and Counterfactual Explanations for Black Box Decision Making
    Riccardo Guidotti, Anna Monreale, Fosca Giannotti, Dino Pedreschi, Salvatore Ruggieri, and 1 more author
    IEEE Intelligent Systems, Nov 2019