Assessing Privacy Exposure in Global vs Local Explainers
2025 · Publication
A privacy risk assessment study (Naretto et al., 2025) analyzes how interpretable global and local explanation methods may expose sensitive information about the training data.
It surveys recent membership and attribute inference threats and examines how different explanation formats can amplify leakage. The proposed computational framework benchmarks exposure scenarios, motivating privacy-aware design of transparent systems.
The work supports integrating explainability with compliance practices (e.g., the GDPR) without sacrificing accountability.
References
2025
Evaluating the Privacy Exposure of Interpretable Global and Local Explainers.
Francesca Naretto, Anna Monreale, and Fosca Giannotti. Transactions on Data Privacy 18(2), 67–93.
During the last few years, the abundance of data has significantly boosted the performance of Machine Learning models, integrating them into several aspects of daily life. However, the rise of powerful Artificial Intelligence tools has introduced ethical and legal complexities. This paper proposes a computational framework to analyze the ethical and legal dimensions of Machine Learning models, focusing specifically on privacy concerns and interpretability. In fact, the research community has recently proposed privacy attacks able to reveal whether a record was part of a black-box model's training set, or to infer variable values, by accessing and querying the model. These attacks highlight privacy vulnerabilities and show that the GDPR might be violated by making data or Machine Learning models accessible. At the same time, the complexity of these models, often labelled as “black-boxes”, has made the development of explanation methods indispensable to enhance trust and facilitate their acceptance and adoption in high-stakes scenarios.
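To make the membership inference threat mentioned in the abstract concrete, the following is a minimal, hypothetical sketch of a confidence-based membership inference attack against a black-box classifier. It is not the attack or framework studied in the paper; the synthetic dataset, the random-forest target model, and the confidence scoring are illustrative assumptions, intended only to show how a model's higher confidence on training records can leak membership.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a sensitive dataset (illustrative assumption).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_out, y_train, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

# Target "black-box" model that the attacker can only query.
target = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

def confidence_in_true_label(model, X, y):
    # Attack signal: the model's predicted probability for the true label.
    # Training-set members tend to receive higher confidence than non-members.
    proba = model.predict_proba(X)
    return proba[np.arange(len(y)), y]

scores = np.concatenate([confidence_in_true_label(target, X_train, y_train),
                         confidence_in_true_label(target, X_out, y_out)])
is_member = np.concatenate([np.ones(len(y_train)), np.zeros(len(y_out))])

# AUC well above 0.5 means the confidence scores leak membership information.
print("Membership inference AUC:", roc_auc_score(is_member, scores))

In this toy setting the attacker only needs query access to the model's predicted probabilities; the explainer-specific leakage analyzed in the paper considers what additional information global and local explanations expose on top of such queries.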
@article{NMG2025,
  author  = {Naretto, Francesca and Monreale, Anna and Giannotti, Fosca},
  title   = {Evaluating the Privacy Exposure of Interpretable Global and Local Explainers},
  journal = {Transactions on Data Privacy},
  volume  = {18},
  number  = {2},
  pages   = {67--93},
  month   = dec,
  year    = {2025}
}