2024

  1. Fiper: a Visual-based Explanation Combining Rules and Feature Importance
    Eleonora Cappuccio, Daniele Fadda, Rosa Lanzilotti, and Salvatore Rinzivillo
    arXiv preprint arXiv:2404.16903, Dec 2024
  2. Generative Model for Decision Trees
    Riccardo Guidotti, Anna Monreale, Mattia Setzu, and Giulia Volpi
    In Thirty-Eighth AAAI Conference on Artificial Intelligence, AAAI 2024, Thirty-Sixth Conference on Innovative Applications of Artificial Intelligence, IAAI 2024, Fourteenth Symposium on Educational Advances in Artificial Intelligence, EAAI 2024, February 20-27, 2024, Vancouver, Canada, Dec 2024
  3. FLocalX - Local to Global Fuzzy Explanations for Black Box Classifiers
    Guillermo Fernández, Riccardo Guidotti, Fosca Giannotti, Mattia Setzu, Juan A. Aledo, and 2 more authors
    In Advances in Intelligent Data Analysis XXII - 22nd International Symposium on Intelligent Data Analysis, IDA 2024, Stockholm, Sweden, April 24-26, 2024, Proceedings, Part II, Dec 2024
  4. A Frank System for Co-Evolutionary Hybrid Decision-Making
    Federico Mazzoni, Riccardo Guidotti, and Alessio Malizia
    In Advances in Intelligent Data Analysis XXII - 22nd International Symposium on Intelligent Data Analysis, IDA 2024, Stockholm, Sweden, April 24-26, 2024, Proceedings, Part II, Dec 2024
  5. AI, Meet Human: Learning Paradigms for Hybrid Decision Making Systems
    Clara Punzi, Roberto Pellungrini, Mattia Setzu, Fosca Giannotti, and Dino Pedreschi
    CoRR, Dec 2024
  6. FairBelief - Assessing Harmful Beliefs in Language Models
    Mattia Setzu, Marta Marchiori Manerba, Pasquale Minervini, and Debora Nozza
    CoRR, Dec 2024
  7. Advancing Dermatological Diagnostics: Interpretable AI for Enhanced Skin Lesion Classification
    Carlo Metta, Andrea Beretta, Riccardo Guidotti, Yuan Yin, Patrick Gallinari, and 2 more authors
    Diagnostics, Dec 2024
  8. Towards Transparent Healthcare: Advancing Local Explanation Methods in Explainable Artificial Intelligence
    Carlo Metta, Andrea Beretta, Roberto Pellungrini, Salvatore Rinzivillo, and Fosca Giannotti
    Bioengineering, Dec 2024
  9. An Overview of Recent Approaches to Enable Diversity in Large Language Models through Aligning with Human Perspectives
    Benedetta Muscato, Chandana Sree Mala, Marta Marchiori Manerba, Gizem Gezici, and Fosca Giannotti
    In Proceedings of the 3rd Workshop on Perspectivist Approaches to NLP (NLPerspectives) @ LREC-COLING 2024, May 2024

2023

  1. The Importance of Time in Causal Algorithmic Recourse
    Isacco Beretta, and Martina Cinquini
    In World Conference on Explainable Artificial Intelligence, Dec 2023
  2. Benchmarking and survey of explanation methods for black box models
    Francesco Bodria, Fosca Giannotti, Riccardo Guidotti, Francesca Naretto, Dino Pedreschi, and 1 more author
    Data Mining and Knowledge Discovery, Jun 2023
  3. Handling missing values in local post-hoc explainability
    Martina Cinquini, Fosca Giannotti, Riccardo Guidotti, and Andrea Mattei
    In World Conference on Explainable Artificial Intelligence, Dec 2023
  4. EXPHLOT: EXplainable Privacy Assessment for Human LOcation Trajectories
    Francesca Naretto, Roberto Pellungrini, Salvatore Rinzivillo, and Daniele Fadda
    In Discovery Science - 26th International Conference, DS 2023, Porto, Portugal, October 9-11, 2023, Proceedings, Dec 2023
  5. Declarative Reasoning on Explanations Using Constraint Logic Programming
    Laura State, Salvatore Ruggieri, and Franco Turini
    CoRR, Dec 2023
  6. Modeling Events and Interactions through Temporal Processes – A Survey
    Angelica Liguori, Luciano Caroprese, Marco Minici, Bruno Veloso, Francesco Spinnato, and 3 more authors
    Dec 2023
  7. Geolet: An Interpretable Model for Trajectory Classification
    Cristiano Landi, Francesco Spinnato, Riccardo Guidotti, Anna Monreale, and Mirco Nanni
    Dec 2023
  8. Exposing Racial Dialect Bias in Abusive Language Detection: Can Explainability Play a Role?
    Marta Marchiori Manerba, and Virginia Morini
    Dec 2023
  9. Differentiable Causal Discovery with Smooth Acyclic Orientations
    Riccardo Massidda, Francesco Landolfi, Martina Cinquini, and Davide Bacciu
    In ICML 2023 Workshop on Differentiable Almost Everything: Differentiable Relaxations, Algorithms, Operators, and Simulators, Dec 2023
  10. Improving trust and confidence in medical skin lesion diagnosis through explainable deep learning
    Carlo Metta, Andrea Beretta, Riccardo Guidotti, Yuan Yin, Patrick Gallinari, and 2 more authors
    International Journal of Data Science and Analytics, Jun 2023
  11. AUC-based Selective Classification
    Andrea Pugnana, and Salvatore Ruggieri
    In Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, 25–27 Apr 2023
  12. Text to Time Series Representations: Towards Interpretable Predictive Models
    Mattia Poggioli, Francesco Spinnato, and Riccardo Guidotti
    Dec 2023
  13. Topics in Selective Classification
    Andrea Pugnana
    Proceedings of the AAAI Conference on Artificial Intelligence, Sep 2023
  14. Understanding Any Time Series Classifier with a Subsequence-based Explainer
    Francesco Spinnato, Riccardo Guidotti, Anna Monreale, Mirco Nanni, Dino Pedreschi, and 1 more author
    ACM Transactions on Knowledge Discovery from Data, Nov 2023
  15. Reason to explain: Interactive contrastive explanations (REASONX)
    Laura State, Salvatore Ruggieri, and Franco Turini
    Dec 2023
  16. Co-design of Human-centered, Explainable AI for Clinical Decision Support
    Cecilia Panigutti, Andrea Beretta, Daniele Fadda, Fosca Giannotti, Dino Pedreschi, and 2 more authors
    ACM Trans. Interact. Intell. Syst., Dec 2023

2022

  1. Interpretable Latent Space to Enable Counterfactual Explanations
    Francesco Bodria, Riccardo Guidotti, Fosca Giannotti, and Dino Pedreschi
    Dec 2022
  2. Transparent Latent Space Counterfactual Explanations for Tabular Data
    Francesco Bodria, Riccardo Guidotti, Fosca Giannotti, and Dino Pedreschi
    In 2022 IEEE 9th International Conference on Data Science and Advanced Analytics (DSAA), Oct 2022
  3. Explaining Siamese Networks in Few-Shot Learning for Audio Data
    Andrea Fedele, Riccardo Guidotti, and Dino Pedreschi
    In Discovery Science - 25th International Conference, DS 2022, Montpellier, France, October 10-12, 2022, Proceedings, Dec 2022
  4. Explainable AI for Time Series Classification: A Review, Taxonomy and Research Directions
    Andreas Theissler, Francesco Spinnato, Udo Schlegel, and Riccardo Guidotti
    IEEE Access, Dec 2022
  5. Stable and actionable explanations of black-box models through factual and counterfactual rules
    Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Francesca Naretto, Franco Turini, and 2 more authors
    Data Mining and Knowledge Discovery, Dec 2022
  6. Counterfactual explanations and how to find them: literature review and benchmarking
    Riccardo Guidotti
    Data Mining and Knowledge Discovery, Apr 2022
  7. Investigating Debiasing Effects on Classification and Explainability
    Marta Marchiori Manerba, and Riccardo Guidotti
    In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society, Jul 2022
  8. Exemplars and Counterexemplars Explanations for Skin Lesion Classifiers
    Carlo Metta, Riccardo Guidotti, Yuan Yin, Patrick Gallinari, and Salvatore Rinzivillo
    Sep 2022
  9. Evaluating the Privacy Exposure of Interpretable Global Explainers
    Francesca Naretto, Anna Monreale, and Fosca Giannotti
    In 2022 IEEE 4th International Conference on Cognitive Machine Intelligence (CogMI), Dec 2022
  10. Privacy Risk of Global Explainers
    Francesca Naretto, Anna Monreale, and Fosca Giannotti
    Sep 2022
  11. Methods and tools for causal discovery and causal inference
    Ana Rita Nogueira, Andrea Pugnana, Salvatore Ruggieri, Dino Pedreschi, and João Gama
    WIREs Data Mining and Knowledge Discovery, Jan 2022
  12. Understanding the impact of explanations on advice-taking: a user study for AI-based clinical Decision Support Systems
    Cecilia Panigutti, Andrea Beretta, Fosca Giannotti, and Dino Pedreschi
    In CHI Conference on Human Factors in Computing Systems, Apr 2022
  13. Explaining Crash Predictions on Multivariate Time Series Data
    Francesco Spinnato, Riccardo Guidotti, Mirco Nanni, Daniele Maccagnola, Giulia Paciello, and 1 more author
    Dec 2022
  14. Understanding peace through the world news
    Vasiliki Voukelatou, Ioanna Miliou, Fosca Giannotti, and Luca Pappalardo
    EPJ Data Science, Jan 2022
  15. Explaining Black Box with Visual Exploration of Latent Space
    Francesco Bodria, Salvatore Rinzivillo, Daniele Fadda, Riccardo Guidotti, Fosca Giannotti, and 1 more author
    In EuroVis 2022 - Short Papers, Jan 2022

2021

  1. Explainable AI Within the Digital Transformation and Cyber Physical Systems: XAI Methods and Applications
    Dec 2021
  2. Intelligenza artificiale in ambito diabetologico: prospettive, dalla ricerca di base alle applicazioni cliniche [Artificial Intelligence in Diabetology: Perspectives, from Basic Research to Clinical Applications]
    Cecilia Panigutti, and Emanuele Bosi
    Dec 2021
  3. Deep Learning in Biology and Medicine
    Davide Bacciu, Paulo J G Lisboa, and Alfredo Vellido
    Jun 2021
  4. Deriving a Single Interpretable Model by Merging Tree-Based Classifiers
    Valerio Bonsignori, Riccardo Guidotti, and Anna Monreale
    Dec 2021
  5. Trustworthy AI
    Raja Chatila, Virginia Dignum, Michael Fisher, Fosca Giannotti, Katharina Morik, and 2 more authors
    Dec 2021
  6. Matrix Profile-Based Interpretable Time Series Classifier
    Riccardo Guidotti, and Matteo D’Onofrio
    Frontiers in Artificial Intelligence, Oct 2021
  7. Designing Shapelets for Interpretable Data-Agnostic Classification
    Riccardo Guidotti, and Anna Monreale
    In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, Jul 2021
  8. Evaluating local explanation methods on ground truth
    Riccardo Guidotti
    Artificial Intelligence, Feb 2021
  9. FairShades: Fairness Auditing via Explainability in Abusive Language Detection Systems
    Marta Marchiori Manerba, and Riccardo Guidotti
    In 2021 IEEE Third International Conference on Cognitive Machine Intelligence (CogMI), Dec 2021
  10. Exemplars and Counterexemplars Explanations for Image Classifiers, Targeting Skin Lesion Labeling
    Carlo Metta, Riccardo Guidotti, Yuan Yin, Patrick Gallinari, and Salvatore Rinzivillo
    In 2021 IEEE Symposium on Computers and Communications (ISCC), Sep 2021
  11. FairLens: Auditing black-box clinical decision support systems
    Cecilia Panigutti, Alan Perotti, André Panisson, Paolo Bajardi, and Dino Pedreschi
    Information Processing & Management, Sep 2021
  12. Occlusion-Based Explanations in Deep Recurrent Models for Biomedical Signals
    Michele Resta, Anna Monreale, and Davide Bacciu
    Entropy, Aug 2021
  13. TRIPLEx: Triple Extraction for Explanation
    Mattia Setzu, Anna Monreale, and Pasquale Minervini
    In 2021 IEEE Third International Conference on Cognitive Machine Intelligence (CogMI), Dec 2021
  14. GLocalX - From Local to Global Explanations of Black Box AI Models
    Mattia Setzu, Riccardo Guidotti, Anna Monreale, Franco Turini, Dino Pedreschi, and 1 more author
    Artificial Intelligence, May 2021

2020

  1. Explaining Any Time Series Classifier
    Riccardo Guidotti, Anna Monreale, Francesco Spinnato, Dino Pedreschi, and Fosca Giannotti
    In 2020 IEEE Second International Conference on Cognitive Machine Intelligence (CogMI), Oct 2020
  2. Black Box Explanation by Learning Image Exemplars in the Latent Feature Space
    Riccardo Guidotti, Anna Monreale, Stan Matwin, and Dino Pedreschi
    Dec 2020
  3. Data-Agnostic Local Neighborhood Generation
    Riccardo Guidotti, and Anna Monreale
    In 2020 IEEE International Conference on Data Mining (ICDM), Nov 2020
  4. Explaining Image Classifiers Generating Exemplars and Counter-Exemplars from Latent Representations
    Riccardo Guidotti, Anna Monreale, Stan Matwin, and Dino Pedreschi
    Proceedings of the AAAI Conference on Artificial Intelligence, Apr 2020
  5. Explaining Sentiment Classification with Synthetic Exemplars and Counter-Exemplars
    Orestis Lampridis, Riccardo Guidotti, and Salvatore Ruggieri
    Dec 2020
  6. Predicting and Explaining Privacy Risk Exposure in Mobility Data
    Francesca Naretto, Roberto Pellungrini, Anna Monreale, Franco Maria Nardini, and Mirco Musolesi
    Dec 2020
  7. Prediction and Explanation of Privacy Risk on Mobility Data with Neural Networks
    Francesca Naretto, Roberto Pellungrini, Franco Maria Nardini, and Fosca Giannotti
    Dec 2020
  8. Doctor XAI: an ontology-based approach to black-box sequential data classification explanations
    Cecilia Panigutti, Alan Perotti, and Dino Pedreschi
    In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, Jan 2020
  9. Global Explanations with Local Scoring
    Mattia Setzu, Riccardo Guidotti, Anna Monreale, and Franco Turini
    Dec 2020

2019

  1. Helping Your Docker Images to Spread Based on Explainable Models
    Riccardo Guidotti, Jacopo Soldani, Davide Neri, Antonio Brogi, and Dino Pedreschi
    Dec 2019
  2. Factual and Counterfactual Explanations for Black Box Decision Making
    Riccardo Guidotti, Anna Monreale, Fosca Giannotti, Dino Pedreschi, Salvatore Ruggieri, and 1 more author
    IEEE Intelligent Systems, Nov 2019
  3. Investigating Neighborhood Generation Methods for Explanations of Obscure Image Classifiers
    Riccardo Guidotti, Anna Monreale, and Leonardo Cariaggi
    Dec 2019
  4. Explaining Multi-label Black-Box Classifiers for Health Applications
    Cecilia Panigutti, Riccardo Guidotti, Anna Monreale, and Dino Pedreschi
    Aug 2019
  5. Meaningful Explanations of Black Box AI Decision Systems
    Dino Pedreschi, Fosca Giannotti, Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, and 1 more author
    Proceedings of the AAAI Conference on Artificial Intelligence, Jul 2019

2018

  1. A Survey of Methods for Explaining Black Box Models
    Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Fosca Giannotti, and 1 more author
    ACM Computing Surveys, Aug 2018