Ethics and Legal

Explainability is one of the most important ethical and legal values identified by EU regulations (GDPR, AI Act). However, when designing a trustworthy AI system it is important to also take into consideration other ethical and legal values, such as privacy, fairness, safety, and robustness. This requires analyzing the interplay among the different values and understanding whether they conflict with one another.

In this direction, Line 5 has begun investigating the interplay between explainability and privacy, and between explainability and fairness, from different viewpoints. In particular, the research of this line tries to answer the following questions:

Can explanation methods, by introducing a level of transparency into the whole decision process, jeopardize the individual privacy of the people represented in the training data? Can explanation methods help in understanding the reasons for the ethical risks associated with the use of AI systems (e.g., privacy violations and biased behaviors)? Can explanation methods be fundamental for discovering other ethical issues, such as unfair behavior of AI systems?

Starting from these questions, we have contributed the following studies.

Concerning the ability of explanation methods to explain ethical risks such as privacy violations and biases, in [NPM2020, NPN2020] we propose EXPERT, an EXplainable Privacy ExposuRe predicTion framework. EXPERT exploits explainability to increase users' awareness of their privacy exposure. We applied it to the privacy risk prediction and explanation of both tabular data [NPM2020] and sequential data [NPN2020]. In the first setting we considered the mobility context, where for each user we have their historical movements (a spatio-temporal trajectory). Here EXPERT proceeds in steps (a minimal code sketch follows this passage). First, its privacy risk exposure module extracts from human mobility data an individual mobility profile describing the mobility behavior of each user. Second, for each user it simulates a privacy attack and quantifies the associated privacy risk. Third, it uses the users' mobility profiles, together with their associated privacy risks, to train an ML model. For a new user, along with the risk prediction, EXPERT also provides an explanation of the predicted risk, generated by the risk explanation module using two state-of-the-art explanation techniques, SHAP and LORE [GMG2019]. In the second setting [NPN2020], the prediction phase is based not on a mobility profile but on the trajectory itself; given the absence of features, we used only SHAP as the explanation method.

In [PPB2021] we also present FairLens, a framework able to detect and explain potential bias issues in clinical Decision Support Systems (DSS). FairLens allows testing a clinical DSS before its deployment, i.e., before handing it over to final decision-makers such as physicians and nurses. It takes bias analysis a step further by explaining the reasons behind poor model performance on specific patient groups; in particular, it uses GlocalX to explain which elements of the patients' clinical histories influence the misclassifications (a second sketch, after the EXPERT example below, illustrates this audit loop).
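As an illustration, here is a minimal, hypothetical sketch of EXPERT's predict-then-explain pipeline in the tabular (mobility profile) setting. The feature names, the synthetic profiles, and the stand-in "attack" score are assumptions made for this example, not the original implementation; only the overall structure (profile extraction, simulated risk labeling, model training, SHAP explanation) follows the description above.

```python
# Hypothetical sketch of EXPERT's tabular pipeline; data and "attack" are toy stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
import shap

rng = np.random.default_rng(0)

# 1) Privacy risk exposure module: one mobility profile vector per user,
#    e.g. trip counts, distinct visited locations, radius of gyration.
feature_names = ["n_trips", "n_distinct_locations", "radius_of_gyration_km"]
X = rng.random((500, 3)) * np.array([200.0, 50.0, 30.0])

# 2) Simulated privacy attack: a toy risk score that grows with how
#    distinctive a user's mobility is (a real attack matches adversary
#    background knowledge against the data to measure re-identification).
y = np.clip(0.6 * X[:, 1] / 50.0 + 0.4 * X[:, 2] / 30.0
            + 0.05 * rng.standard_normal(500), 0.0, 1.0)

# 3) Train the risk predictor on (profile, simulated risk) pairs.
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# 4) Risk explanation module: predict and explain the risk of a new user.
new_user = np.array([[120.0, 42.0, 18.5]])
print(f"predicted privacy risk: {model.predict(new_user)[0]:.2f}")
shap_values = shap.TreeExplainer(model).shap_values(new_user)
for name, phi in zip(feature_names, shap_values[0]):
    print(f"  {name}: {phi:+.3f}")  # per-feature contribution to the predicted risk
```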

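The FairLens audit loop can likewise be sketched in a few lines: stratify a labelled validation set by patient attributes, rank groups by how much the model underperforms on them, and then learn rules separating the worst group's misclassifications. Everything below (the data, the groups, and the shallow surrogate tree standing in for GlocalX) is an invented illustration of the workflow, not the actual framework.

```python
# Hypothetical sketch of a FairLens-style audit; data and groups are invented.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
n = 1000

# Labelled validation data with the black-box DSS predictions attached.
df = pd.DataFrame({
    "age_group": rng.choice(["<40", "40-65", ">65"], n),
    "sex": rng.choice(["F", "M"], n),
    "n_prior_visits": rng.integers(0, 30, n),
    "y_true": rng.integers(0, 2, n),
})
df["y_pred"] = np.where(rng.random(n) < 0.85, df["y_true"], 1 - df["y_true"])

# 1) Bias detection: per-group error rate compared with the overall error rate.
df["error"] = (df["y_true"] != df["y_pred"]).astype(int)
overall = df["error"].mean()
disparity = (df.groupby(["age_group", "sex"])["error"].mean() - overall)
disparity = disparity.sort_values(ascending=False)
print(disparity.head())

# 2) Explanation step: on the worst group, learn rules that separate
#    misclassified from correctly classified patients (surrogate for GlocalX).
worst = disparity.index[0]
grp = df[(df["age_group"] == worst[0]) & (df["sex"] == worst[1])]
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(grp[["n_prior_visits"]], grp["error"])
print(export_text(tree, feature_names=["n_prior_visits"]))
```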
Concerning the use of explainability as a means for discovering unfair behaviors, in [MG2021] we propose FairShades, a model-agnostic approach for auditing the outcomes of abusive language detection systems. FairShades combines explainability and fairness evaluation in a proactive pipeline to identify unintended biases and the sensitive categories toward which the black-box model under assessment is most discriminative. It is a task-specific approach for abusive language detection: it can be used to test the fairness of any abusive language detection system on any textual dataset. However, its ideal application is to sentences containing protected identities, i.e., expressions referring to nationality, gender, etc., since its primary scope is to uncover biases rather than to explain the reasons for a prediction.
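A minimal sketch of the perturbation idea behind this kind of audit: hold a neutral sentence template fixed, swap the protected identity it mentions, and check whether the black box changes its output. The toy classifier and identity list below are assumptions for illustration only; FairShades itself builds on a richer neighborhood-generation and explanation pipeline.

```python
# Toy perturbation-based fairness audit of a black-box text classifier.
from typing import Callable

IDENTITIES = ["women", "men", "muslims", "christians", "immigrants"]

def toy_classifier(text: str) -> int:
    """Stand-in black box: flags any sentence mentioning 'women' as abusive."""
    return int("women" in text.lower())

def audit(template: str, classify: Callable[[str], int]) -> dict:
    """Score the same neutral sentence template across protected identities."""
    return {identity: classify(template.format(identity=identity))
            for identity in IDENTITIES}

scores = audit("I saw some {identity} at the station today.", toy_classifier)
print(scores)

# Divergent outputs on a neutral template expose an unintended bias
# toward specific protected identities.
flagged = [i for i, s in scores.items() if s != min(scores.values())]
print("potentially biased toward:", flagged)
```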

Research line people

Riccardo Guidotti, Assistant Professor, University of Pisa (R.LINE 1 ▪ 3 ▪ 4 ▪ 5)
Franco Turini, Full Professor, University of Pisa (R.LINE 1 ▪ 2 ▪ 5)
Salvo Rinzivillo, Researcher, ISTI - CNR Pisa (R.LINE 1 ▪ 3 ▪ 4 ▪ 5)
Andrea Beretta, Researcher, ISTI - CNR Pisa (R.LINE 1 ▪ 4 ▪ 5)
Anna Monreale, Associate Professor, University of Pisa (R.LINE 1 ▪ 4 ▪ 5)
Cecilia Panigutti, PhD Student, Scuola Normale (R.LINE 1 ▪ 4 ▪ 5)
Roberto Pellungrini, Researcher, University of Pisa (R.LINE 5)
Francesca Naretto, Postdoctoral Researcher, Scuola Normale (R.LINE 1 ▪ 3 ▪ 4 ▪ 5)
Marta Marchiori Manerba, PhD Student, University of Pisa (R.LINE 1 ▪ 2 ▪ 5)
Clara Punzi, PhD Student, Scuola Normale (R.LINE 1 ▪ 5)
Andrea Pugnana, Researcher, Scuola Normale (R.LINE 2)
António Maria Lage De Sousa Leitão, PhD Student, Scuola Normale (R.LINE 1)


Line 5 - Publications

2025

  1. Embracing Diversity: A Multi-Perspective Approach with Soft Labels
    Benedetta Muscato, Praveen Bushipaka, Gizem Gezici, Lucia Passaro, Fosca Giannotti, and 1 more author
    Sep 2025
  2. SafeGen: safeguarding privacy and fairness through a genetic method
    Martina Cinquini, Marta Marchiori Manerba, Federico Mazzoni, Francesca Pratesi, and Riccardo Guidotti
    Machine Learning, Sep 2025
  3. A Bias Injection Technique to Assess the Resilience of Causal Discovery Methods
    Martina Cinquini, Karima Makhlouf, Sami Zhioua, Catuscia Palamidessi, and Riccardo Guidotti
    IEEE Access, Sep 2025
  4. Differentially Private FastSHAP for Federated Learning Model Explainability
    Valerio Bonsignori, Luca Corbucci, Francesca Naretto, and Anna Monreale
    In 2025 International Joint Conference on Neural Networks (IJCNN) , Jun 2025
  5. Balancing Fairness and Interpretability in Clustering with FairParTree
    Cristiano Landi, Alessio Cascione, Marta Marchiori Manerba, and Riccardo Guidotti
    Oct 2025
  6. Towards Building a Trustworthy RAG-Based Chatbot for the Italian Public Administration
    Chandana Sree Mala, Christian Maio, Mattia Proietti, Gizem Gezici, Fosca Giannotti, and 3 more authors
    Sep 2025
  7. Perspectives in Play: A Multi-Perspective Approach for More Inclusive NLP Systems
    Benedetta Muscato, Lucia Passaro, Gizem Gezici, and Fosca Giannotti
    In Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence , Sep 2025
  8. FeDa4Fair: Client-Level Federated Datasets for Fairness Evaluation
    Xenia Heilmann, Luca Corbucci, Mattia Cerrato, and Anna Monreale
    Dec 2025
  9. Evaluating the Privacy Exposure of Interpretable Global and Local Explainers
    Francesca Naretto, Anna Monreale, and Fosca Giannotti
    Dec 2025

2024

  1. The ALTAI checklist as a tool to assess ethical and legal implications for a trustworthy AI development in education
    Andrea Fedele, Clara Punzi, and Stefano Tramacere
    Computer Law & Security Review, Jul 2024
  2. FairBelief - Assessing Harmful Beliefs in Language Models
    Mattia Setzu, Marta Marchiori Manerba, Pasquale Minervini, and Debora Nozza
    In Proceedings of the 4th Workshop on Trustworthy Natural Language Processing (TrustNLP 2024) , Jul 2024
  3. Social Bias Probing: Fairness Benchmarking for Language Models
    Marta Marchiori Manerba, Karolina Stanczak, Riccardo Guidotti, and Isabelle Augenstein
    In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing , Jul 2024
  4. Mapping the landscape of ethical considerations in explainable AI research
    Luca Nannini, Marta Marchiori Manerba, and Isacco Beretta
    Ethics and Information Technology, Jun 2024
  5. Analysis of exposome and genetic variability suggests stress as a major contributor for development of pancreatic ductal adenocarcinoma
    Giulia Peduzzi, Alessio Felici, Roberto Pellungrini, Francesca Giorgolo, Riccardo Farinella, and 7 more authors
    Digestive and Liver Disease, Jun 2024
  6. Multi-Perspective Stance Detection
    Benedetta Muscato, Praveen Bushipaka, Gizem Gezici, Lucia Passaro, and Fosca Giannotti
    Dec 2024
  7. Beyond Headlines: A Corpus of Femicides News Coverage in Italian Newspapers
    Eleonora Cappuccio, Benedetta Muscato, Laura Pollacci, Marta Marchiori Manerba, Clara Punzi, and 5 more authors
    Dec 2024
  8. A survey on the impact of AI-based recommenders on human behaviours: methodologies, outcomes and future directions
    Luca Pappalardo, Emanuele Ferragina, Salvatore Citraro, Giuliano Cornacchia, Mirco Nanni, and 9 more authors
    Dec 2024
  9. The ethical impact assessment of selling life insurance to Titanic passengers
    Gizem Gezici, Chiara Mannari, and Lorenzo Orlandi
    Dec 2024
  10. XAI in healthcare
    Gizem Gezici, Carlo Metta, Andrea Beretta, Roberto Pellungrini, Salvatore Rinzivillo, Dino Pedreschi, and Fosca Giannotti
    Dec 2024
  11. Interpretable and Fair Mechanisms for Abstaining Classifiers
    Daphne Lenders, Andrea Pugnana, Roberto Pellungrini, Toon Calders, Dino Pedreschi, and 1 more author
    Dec 2024

2023

  1. Effects of Route Randomization on Urban Emissions
    Giuliano Cornacchia, Mirco Nanni, Dino Pedreschi, and Luca Pappalardo
    SUMO Conference Proceedings, Jun 2023
  2. EXPHLOT: EXplainable Privacy Assessment for Human LOcation Trajectories
    Francesca Naretto, Roberto Pellungrini, Salvatore Rinzivillo, and Daniele Fadda
    Jun 2023
  3. Exposing Racial Dialect Bias in Abusive Language Detection: Can Explainability Play a Role?
    Marta Marchiori Manerba, and Virginia Morini
    Jun 2023

2022

  1. Privacy Risk of Global Explainers
    Francesca Naretto, Anna Monreale, and Fosca Giannotti
    Sep 2022
  2. Assessing Trustworthy AI in Times of COVID-19: Deep Learning for Predicting a Multiregional Score Conveying the Degree of Lung Compromise in COVID-19 Patients
    Himanshi Allahabadi, Julia Amann, Isabelle Balot, Andrea Beretta, Charles Binkley, and 52 more authors
    IEEE Transactions on Technology and Society, Dec 2022
  3. Evaluating the Privacy Exposure of Interpretable Global Explainers
    Francesca Naretto, Anna Monreale, and Fosca Giannotti
    In 2022 IEEE 4th International Conference on Cognitive Machine Intelligence (CogMI) , Dec 2022
  4. Investigating Debiasing Effects on Classification and Explainability
    Marta Marchiori Manerba, and Riccardo Guidotti
    In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society , Jul 2022

2021

  1. Trustworthy AI
    Raja Chatila, Virginia Dignum, Michael Fisher, Fosca Giannotti, Katharina Morik, and 2 more authors
    Jul 2021
  2. FairShades: Fairness Auditing via Explainability in Abusive Language Detection Systems
    Marta Marchiori Manerba, and Riccardo Guidotti
    In 2021 IEEE Third International Conference on Cognitive Machine Intelligence (CogMI) , Dec 2021
  3. Explainable AI Within the Digital Transformation and Cyber Physical Systems: XAI Methods and Applications
    Dec 2021

2020

  1. Prediction and Explanation of Privacy Risk on Mobility Data with Neural Networks
    Francesca Naretto, Roberto Pellungrini, Franco Maria Nardini, and Fosca Giannotti
    Dec 2020
  2. Predicting and Explaining Privacy Risk Exposure in Mobility Data
    Francesca Naretto, Roberto Pellungrini, Anna Monreale, Franco Maria Nardini, and Mirco Musolesi
    Dec 2020
  3. Rischi etico-legali dell’Intelligenza Artificiale [Ethical and legal risks of Artificial Intelligence]
    Anna Monreale
    Dec 2020