2024

  1. Generative Model for Decision Trees
    Riccardo Guidotti, Anna Monreale, Mattia Setzu, and Giulia Volpi
    In Thirty-Eighth AAAI Conference on Artificial Intelligence, AAAI 2024, Thirty-Sixth Conference on Innovative Applications of Artificial Intelligence, IAAI 2024, Fourteenth Symposium on Educational Advances in Artificial Intelligence, EAAI 2024, February 20-27, 2024, Vancouver, Canada, Feb 2024
  2. FLocalX - Local to Global Fuzzy Explanations for Black Box Classifiers
    Guillermo Fernández, Riccardo Guidotti, Fosca Giannotti, Mattia Setzu, Juan A. Aledo, and 2 more authors
    In Advances in Intelligent Data Analysis XXII - 22nd International Symposium on Intelligent Data Analysis, IDA 2024, Stockholm, Sweden, April 24-26, 2024, Proceedings, Part II, Apr 2024
  3. A Frank System for Co-Evolutionary Hybrid Decision-Making
    Federico Mazzoni, Riccardo Guidotti, and Alessio Malizia
    In Advances in Intelligent Data Analysis XXII - 22nd International Symposium on Intelligent Data Analysis, IDA 2024, Stockholm, Sweden, April 24-26, 2024, Proceedings, Part II, Apr 2024
  4. AI, Meet Human: Learning Paradigms for Hybrid Decision Making Systems
    Clara Punzi, Roberto Pellungrini, Mattia Setzu, Fosca Giannotti, and Dino Pedreschi
    CoRR, Dec 2024
  5. FairBelief - Assessing Harmful Beliefs in Language Models
    Mattia Setzu, Marta Marchiori Manerba, Pasquale Minervini, and Debora Nozza
    CoRR, Dec 2024
  6. Advancing Dermatological Diagnostics: Interpretable AI for Enhanced Skin Lesion Classification
    Carlo Metta, Andrea Beretta, Riccardo Guidotti, Yuan Yin, Patrick Gallinari, and 2 more authors
    Diagnostics, Dec 2024
  7. Towards Transparent Healthcare: Advancing Local Explanation Methods in Explainable Artificial Intelligence
    Carlo Metta, Andrea Beretta, Roberto Pellungrini, Salvatore Rinzivillo, and Fosca Giannotti
    Bioengineering, Dec 2024
  8. An Overview of Recent Approaches to Enable Diversity in Large Language Models through Aligning with Human Perspectives
    Benedetta Muscato, Chandana Sree Mala, Marta Marchiori Manerba, Gizem Gezici, and Fosca Giannotti
    In Proceedings of the 3rd Workshop on Perspectivist Approaches to NLP (NLPerspectives) @ LREC-COLING 2024, May 2024
  9. Explainable Authorship Identification in Cultural Heritage Applications
    Mattia Setzu, Silvia Corbara, Anna Monreale, Alejandro Moreo, and Fabrizio Sebastiani
    Journal on Computing and Cultural Heritage, Jun 2024
  10. Exploring Large Language Models Capabilities to Explain Decision Trees
    Paulo Bruno Serafim, Pierluigi Crescenzi, Gizem Gezici, Eleonora Cappuccio, Salvatore Rinzivillo, and 1 more author
    Jun 2024
  11. Analysis of exposome and genetic variability suggests stress as a major contributor for development of pancreatic ductal adenocarcinoma
    Giulia Peduzzi, Alessio Felici, Roberto Pellungrini, Francesca Giorgolo, Riccardo Farinella, and 7 more authors
    Digestive and Liver Disease, Jun 2024
  12. Interpretable and Fair Mechanisms for Abstaining Classifiers
    Daphne Lenders, Andrea Pugnana, Roberto Pellungrini, Toon Calders, Dino Pedreschi, and 1 more author
    Jun 2024
  13. A Frank System for Co-Evolutionary Hybrid Decision-Making
    Federico Mazzoni, Riccardo Guidotti, and Alessio Malizia
    Jun 2024
  14. Explaining Siamese networks in few-shot learning
    Andrea Fedele, Riccardo Guidotti, and Dino Pedreschi
    Machine Learning, Apr 2024
  15. Drifting explanations in continual learning
    Andrea Cossu, Francesco Spinnato, Riccardo Guidotti, and Davide Bacciu
    Neurocomputing, Apr 2024
  16. Explainable Artificial Intelligence (XAI) 2.0: A manifesto of open challenges and interdisciplinary research directions
    Luca Longo, Mario Brcic, Federico Cabitza, Jaesik Choi, Roberto Confalonieri, and 14 more authors
    Information Fusion, Apr 2024
  17. Social Bias Probing: Fairness Benchmarking for Language Models
    Marta Marchiori Manerba, Karolina Stanczak, Riccardo Guidotti, and Isabelle Augenstein
    In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, Nov 2024
  18. Data-Agnostic Pivotal Instances Selection for Decision-Making Models
    Alessio Cascione, Mattia Setzu, and Riccardo Guidotti
    In Machine Learning and Knowledge Discovery in Databases. Research Track, Nov 2024
  19. Causality-Aware Local Interpretable Model-Agnostic Explanations
    Martina Cinquini, and Riccardo Guidotti
    In Explainable Artificial Intelligence, Jul 2024
  20. Bridging the Gap in Hybrid Decision-Making Systems
    Federico Mazzoni, Roberto Pellungrini, and Riccardo Guidotti
    Jul 2024
  21. An Interactive Interface for Feature Space Navigation
    Eleonora Cappuccio, Isacco Beretta, Marta Marchiori Manerba, and Salvatore Rinzivillo
    In HHAI 2024: Hybrid Human AI Systems for the Social Good - Proceedings of the Third International Conference on Hybrid Human-Artificial Intelligence, Malmö, Sweden, 10-14 June 2024, Jul 2024
  22. A survey on the impact of AI-based recommenders on human behaviours: methodologies, outcomes and future directions
    Luca Pappalardo, Emanuele Ferragina, Salvatore Citraro, Giuliano Cornacchia, Mirco Nanni, and 9 more authors
    Jun 2024
  23. Inference through innovation processes tested in the authorship attribution task
    Giulio Tani Raffaelli, Margherita Lalli, and Francesca Tria
    Communications Physics, Sep 2024
  24. DINE: Dimensional Interpretability of Node Embeddings
    Simone Piaggesi, Megha Khosla, André Panisson, and Avishek Anand
    IEEE Transactions on Knowledge and Data Engineering, Dec 2024
  25. Counterfactual and Prototypical Explanations for Tabular Data via Interpretable Latent Space
    Simone Piaggesi, Francesco Bodria, Riccardo Guidotti, Fosca Giannotti, and Dino Pedreschi
    IEEE Access, Nov 2024
  26. Enhancing Echo State Networks with Gradient-based Explainability Methods
    Francesco Spinnato, Andrea Cossu, Riccardo Guidotti, Andrea Ceni, Claudio Gallicchio, and 1 more author
    In ESANN 2024 proceedings, Nov 2024
  27. Fast, Interpretable, and Deterministic Time Series Classification With a Bag-of-Receptive-Fields
    Francesco Spinnato, Riccardo Guidotti, Anna Monreale, and Mirco Nanni
    IEEE Access, Oct 2024
  28. Mapping the landscape of ethical considerations in explainable AI research
    Luca Nannini, Marta Marchiori Manerba, and Isacco Beretta
    Ethics and Information Technology, Jun 2024
  29. Commodity-specific triads in the Dutch inter-industry production network
    Marzio Di Vece, Frank P. Pijpers, and Diego Garlaschelli
    Scientific Reports, Feb 2024
    Publisher: Nature Publishing Group

2023

  1. Co-design of Human-centered, Explainable AI for Clinical Decision Support
    Cecilia Panigutti, Andrea Beretta, Daniele Fadda, Fosca Giannotti, Dino Pedreschi, and 2 more authors
    ACM Trans. Interact. Intell. Syst., Dec 2023
  2. The Importance of Time in Causal Algorithmic Recourse
    Isacco Beretta, and Martina Cinquini
    In World Conference on Explainable Artificial Intelligence, Dec 2023
  3. Benchmarking and survey of explanation methods for black box models
    Francesco Bodria, Fosca Giannotti, Riccardo Guidotti, Francesca Naretto, Dino Pedreschi, and 1 more author
    Data Mining and Knowledge Discovery, Jun 2023
  4. Demo: an Interactive Visualization Combining Rule-Based and Feature Importance Explanations
    Eleonora Cappuccio, Daniele Fadda, Rosa Lanzilotti, and Salvatore Rinzivillo
    In Proceedings of the 15th Biannual Conference of the Italian SIGCHI Chapter, Torino, Italy, Jun 2023
  5. Handling missing values in local post-hoc explainability
    Martina Cinquini, Fosca Giannotti, Riccardo Guidotti, and Andrea Mattei
    In World Conference on Explainable Artificial Intelligence, Oct 2023
  6. EXPHLOT: EXplainable Privacy Assessment for Human LOcation Trajectories
    Francesca Naretto, Roberto Pellungrini, Salvatore Rinzivillo, and Daniele Fadda
    In Discovery Science - 26th International Conference, DS 2023, Porto, Portugal, October 9-11, 2023, Proceedings, Oct 2023
  7. Declarative Reasoning on Explanations Using Constraint Logic Programming
    Laura State, Salvatore Ruggieri, and Franco Turini
    CoRR, Sep 2023
  8. Modeling Events and Interactions through Temporal Processes – A Survey
    Angelica Liguori, Luciano Caroprese, Marco Minici, Bruno Veloso, Francesco Spinnato, and 3 more authors
    Mar 2023
  9. Geolet: An Interpretable Model for Trajectory Classification
    Cristiano Landi, Francesco Spinnato, Riccardo Guidotti, Anna Monreale, and Mirco Nanni
    Apr 2023
  10. Exposing Racial Dialect Bias in Abusive Language Detection: Can Explainability Play a Role?
    Marta Marchiori Manerba, and Virginia Morini
    Jan 2023
  11. Differentiable Causal Discovery with Smooth Acyclic Orientations
    Riccardo Massidda, Francesco Landolfi, Martina Cinquini, and Davide Bacciu
    In ICML 2023 Workshop on Differentiable Almost Everything: Differentiable Relaxations, Algorithms, Operators, and Simulators, Jun 2023
  12. Improving trust and confidence in medical skin lesion diagnosis through explainable deep learning
    Carlo Metta, Andrea Beretta, Riccardo Guidotti, Yuan Yin, Patrick Gallinari, and 2 more authors
    International Journal of Data Science and Analytics, Jun 2023
  13. AUC-based Selective Classification
    Andrea Pugnana, and Salvatore Ruggieri
    In Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, 25–27 Apr 2023
  14. Text to Time Series Representations: Towards Interpretable Predictive Models
    Mattia Poggioli, Francesco Spinnato, and Riccardo Guidotti
    Oct 2023
  15. Topics in Selective Classification
    Andrea Pugnana
    Proceedings of the AAAI Conference on Artificial Intelligence, Sep 2023
  16. Understanding Any Time Series Classifier with a Subsequence-based Explainer
    Francesco Spinnato, Riccardo Guidotti, Anna Monreale, Mirco Nanni, Dino Pedreschi, and 1 more author
    ACM Transactions on Knowledge Discovery from Data, Nov 2023
  17. Reason to explain: Interactive contrastive explanations (REASONX)
    Laura State, Salvatore Ruggieri, and Franco Turini
    Dec 2023
  18. Effects of Route Randomization on Urban Emissions
    Giuliano Cornacchia, Mirco Nanni, Dino Pedreschi, and Luca Pappalardo
    SUMO Conference Proceedings, Jun 2023
  19. Semantic Enrichment of Explanations of AI Models for Healthcare
    Luca Corbucci, Anna Monreale, Cecilia Panigutti, Michela Natilli, Simona Smiraglio, and 1 more author
    Oct 2023
  20. Interpretable Data Partitioning Through Tree-Based Clustering Methods
    Riccardo Guidotti, Cristiano Landi, Andrea Beretta, Daniele Fadda, and Mirco Nanni
    Oct 2023
  21. Explaining Black-Boxes in Federated Learning
    Luca Corbucci, Riccardo Guidotti, and Anna Monreale
    Oct 2023
  22. Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence
    Sajid Ali, Tamer Abuhmed, Shaker El-Sappagh, Khan Muhammad, Jose M. Alonso-Moral, and 5 more authors
    Information Fusion, Oct 2023
  23. A Protocol for Continual Explanation of SHAP
    Andrea Cossu, Francesco Spinnato, Riccardo Guidotti, and Davide Bacciu
    Jul 2023
  24. Trustworthy AI at KDD Lab
    Fosca Giannotti, Riccardo Guidotti, Anna Monreale, Luca Pappalardo, Dino Pedreschi, and 6 more authors
    In Proceedings of the Italia Intelligenza Artificiale - Thematic Workshops co-located with the 3rd CINI National Lab AIIS Conference on Artificial Intelligence (Ital IA 2023), Pisa, Italy, May 29-30, 2023, Jul 2023
  25. Explaining Socio-Demographic and Behavioral Patterns of Vaccination Against the Swine Flu (H1N1) Pandemic
    Clara Punzi, Aleksandra Maslennikova, Gizem Gezici, Roberto Pellungrini, and Fosca Giannotti
    In Explainable Artificial Intelligence, Oct 2023
  26. The Ethical Impact Assessment of Selling Life Insurance to Titanic Passengers
    Gizem Gezici, Chiara Mannari, and Lorenzo Orlandi
    In HHAI Workshops, Oct 2023
  27. The Importance of Time in Causal Algorithmic Recourse
    Isacco Beretta, and Martina Cinquini
    In Explainable Artificial Intelligence, Oct 2023

2022

  1. Explaining Black Box with Visual Exploration of Latent Space
    Francesco Bodria, Salvatore Rinzivillo, Daniele Fadda, Riccardo Guidotti, Fosca Giannotti, and 1 more author
    In EuroVis 2022 - Short Papers, Oct 2022
  2. Interpretable Latent Space to Enable Counterfactual Explanations
    Francesco Bodria, Riccardo Guidotti, Fosca Giannotti, and Dino Pedreschi
    Dec 2022
  3. Transparent Latent Space Counterfactual Explanations for Tabular Data
    Francesco Bodria, Riccardo Guidotti, Fosca Giannotti, and Dino Pedreschi
    In 2022 IEEE 9th International Conference on Data Science and Advanced Analytics (DSAA), Oct 2022
  4. Explaining Siamese Networks in Few-Shot Learning for Audio Data
    Andrea Fedele, Riccardo Guidotti, and Dino Pedreschi
    In Discovery Science - 25th International Conference, DS 2022, Montpellier, France, October 10-12, 2022, Proceedings, Dec 2022
  5. Explainable AI for Time Series Classification: A Review, Taxonomy and Research Directions
    Andreas Theissler, Francesco Spinnato, Udo Schlegel, and Riccardo Guidotti
    IEEE Access, Dec 2022
  6. Stable and actionable explanations of black-box models through factual and counterfactual rules
    Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Francesca Naretto, Franco Turini, and 2 more authors
    Data Mining and Knowledge Discovery, Dec 2022
  7. Counterfactual explanations and how to find them: literature review and benchmarking
    Riccardo Guidotti
    Data Mining and Knowledge Discovery, Apr 2022
  8. Exemplars and Counterexemplars Explanations for Skin Lesion Classifiers
    Carlo Metta, Riccardo Guidotti, Yuan Yin, Patrick Gallinari, and Salvatore Rinzivillo
    Sep 2022
  9. Evaluating the Privacy Exposure of Interpretable Global Explainers
    Francesca Naretto, Anna Monreale, and Fosca Giannotti
    In 2022 IEEE 4th International Conference on Cognitive Machine Intelligence (CogMI), Dec 2022
  10. Privacy Risk of Global Explainers
    Francesca Naretto, Anna Monreale, and Fosca Giannotti
    Sep 2022
  11. Methods and tools for causal discovery and causal inference
    Ana Rita Nogueira, Andrea Pugnana, Salvatore Ruggieri, Dino Pedreschi, and João Gama
    WIREs Data Mining and Knowledge Discovery, Jan 2022
  12. Understanding the impact of explanations on advice-taking: a user study for AI-based clinical Decision Support Systems
    Cecilia Panigutti, Andrea Beretta, Fosca Giannotti, and Dino Pedreschi
    In CHI Conference on Human Factors in Computing Systems, Apr 2022
  13. Explaining Crash Predictions on Multivariate Time Series Data
    Francesco Spinnato, Riccardo Guidotti, Mirco Nanni, Daniele Maccagnola, Giulia Paciello, and 1 more author
    Dec 2022
  15. Understanding peace through the world news
    Vasiliki Voukelatou, Ioanna Miliou, Fosca Giannotti, and Luca Pappalardo
    EPJ Data Science, Jan 2022
  16. A Modularized Framework for Explaining Black Box Classifiers for Text Data
    Mahtab Sarvmaili, Riccardo Guidotti, Anna Monreale, Amilcar Soares, Zahra Sadeghi, and 3 more authors
    Proceedings of the Canadian Conference on Artificial Intelligence, May 2022
    https://caiac.pubpub.org/pub/71c292m6
  17. Investigating Debiasing Effects on Classification and Explainability
    Marta Marchiori Manerba, and Riccardo Guidotti
    In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society, Oxford, United Kingdom, Jul 2022
  18. How routing strategies impact urban emissions
    Giuliano Cornacchia, Matteo Böhm, Giovanni Mauro, Mirco Nanni, Dino Pedreschi, and 1 more author
    In Proceedings of the 30th International Conference on Advances in Geographic Information Systems, Nov 2022

2021

  1. Explainable AI Within the Digital Transformation and Cyber Physical Systems: XAI Methods and Applications
    Dec 2021
  2. Intelligenza artificiale in ambito diabetologico: prospettive, dalla ricerca di base alle applicazioni cliniche
    Cecilia Panigutti, and Emanuele Bosi
    Dec 2021
  3. Deep Learning in Biology and Medicine
    Davide Bacciu, Paulo J G Lisboa, and Alfredo Vellido
    Jun 2021
  4. Deriving a Single Interpretable Model by Merging Tree-Based Classifiers
    Valerio Bonsignori, Riccardo Guidotti, and Anna Monreale
    Dec 2021
  5. Trustworthy AI
    Raja Chatila, Virginia Dignum, Michael Fisher, Fosca Giannotti, Katharina Morik, and 2 more authors
    Dec 2021
  6. Matrix Profile-Based Interpretable Time Series Classifier
    Riccardo Guidotti, and Matteo D’Onofrio
    Frontiers in Artificial Intelligence, Oct 2021
  7. Designing Shapelets for Interpretable Data-Agnostic Classification
    Riccardo Guidotti, and Anna Monreale
    In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, Jul 2021
  8. Evaluating local explanation methods on ground truth
    Riccardo Guidotti
    Artificial Intelligence, Feb 2021
  9. FairShades: Fairness Auditing via Explainability in Abusive Language Detection Systems
    Marta Marchiori Manerba, and Riccardo Guidotti
    In 2021 IEEE Third International Conference on Cognitive Machine Intelligence (CogMI), Dec 2021
  10. Exemplars and Counterexemplars Explanations for Image Classifiers, Targeting Skin Lesion Labeling
    Carlo Metta, Riccardo Guidotti, Yuan Yin, Patrick Gallinari, and Salvatore Rinzivillo
    In 2021 IEEE Symposium on Computers and Communications (ISCC), Sep 2021
  11. FairLens: Auditing black-box clinical decision support systems
    Cecilia Panigutti, Alan Perotti, André Panisson, Paolo Bajardi, and Dino Pedreschi
    Information Processing & Management, Sep 2021
  12. Occlusion-Based Explanations in Deep Recurrent Models for Biomedical Signals
    Michele Resta, Anna Monreale, and Davide Bacciu
    Entropy, Aug 2021
  13. TRIPLEx: Triple Extraction for Explanation
    Mattia Setzu, Anna Monreale, and Pasquale Minervini
    In 2021 IEEE Third International Conference on Cognitive Machine Intelligence (CogMI), Dec 2021
  14. GLocalX - From Local to Global Explanations of Black Box AI Models
    Mattia Setzu, Riccardo Guidotti, Anna Monreale, Franco Turini, Dino Pedreschi, and 1 more author
    Artificial Intelligence, May 2021
  15. Boosting Synthetic Data Generation with Effective Nonlinear Causal Discovery
    Martina Cinquini, Fosca Giannotti, and Riccardo Guidotti
    In 2021 IEEE Third International Conference on Cognitive Machine Intelligence (CogMI), Dec 2021

2020

  1. Explaining Any Time Series Classifier
    Riccardo Guidotti, Anna Monreale, Francesco Spinnato, Dino Pedreschi, and Fosca Giannotti
    In 2020 IEEE Second International Conference on Cognitive Machine Intelligence (CogMI), Oct 2020
  2. Black Box Explanation by Learning Image Exemplars in the Latent Feature Space
    Riccardo Guidotti, Anna Monreale, Stan Matwin, and Dino Pedreschi
    Dec 2020
  3. Data-Agnostic Local Neighborhood Generation
    Riccardo Guidotti, and Anna Monreale
    In 2020 IEEE International Conference on Data Mining (ICDM), Nov 2020
  4. Explaining Image Classifiers Generating Exemplars and Counter-Exemplars from Latent Representations
    Riccardo Guidotti, Anna Monreale, Stan Matwin, and Dino Pedreschi
    Proceedings of the AAAI Conference on Artificial Intelligence, Apr 2020
  5. Explaining Sentiment Classification with Synthetic Exemplars and Counter-Exemplars
    Orestis Lampridis, Riccardo Guidotti, and Salvatore Ruggieri
    Dec 2020
  6. Predicting and Explaining Privacy Risk Exposure in Mobility Data
    Francesca Naretto, Roberto Pellungrini, Anna Monreale, Franco Maria Nardini, and Mirco Musolesi
    Dec 2020
  7. Prediction and Explanation of Privacy Risk on Mobility Data with Neural Networks
    Francesca Naretto, Roberto Pellungrini, Franco Maria Nardini, and Fosca Giannotti
    Dec 2020
  8. Doctor XAI: an ontology-based approach to black-box sequential data classification explanations
    Cecilia Panigutti, Alan Perotti, and Dino Pedreschi
    In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, Jan 2020
  9. Global Explanations with Local Scoring
    Mattia Setzu, Riccardo Guidotti, Anna Monreale, and Franco Turini
    Dec 2020

2019

  1. Helping Your Docker Images to Spread Based on Explainable Models
    Riccardo Guidotti, Jacopo Soldani, Davide Neri, Antonio Brogi, and Dino Pedreschi
    Dec 2019
  2. Factual and Counterfactual Explanations for Black Box Decision Making
    Riccardo Guidotti, Anna Monreale, Fosca Giannotti, Dino Pedreschi, Salvatore Ruggieri, and 1 more author
    IEEE Intelligent Systems, Nov 2019
  3. Investigating Neighborhood Generation Methods for Explanations of Obscure Image Classifiers
    Riccardo Guidotti, Anna Monreale, and Leonardo Cariaggi
    Dec 2019
  4. Explaining Multi-label Black-Box Classifiers for Health Applications
    Cecilia Panigutti, Riccardo Guidotti, Anna Monreale, and Dino Pedreschi
    Aug 2019
  5. Meaningful Explanations of Black Box AI Decision Systems
    Dino Pedreschi, Fosca Giannotti, Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, and 1 more author
    Proceedings of the AAAI Conference on Artificial Intelligence, Jul 2019

2018

  1. A Survey of Methods for Explaining Black Box Models
    Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Fosca Giannotti, and 1 more author
    ACM Computing Surveys, Aug 2018