Assessing Privacy Exposure in Global vs Local Explainers



A privacy risk assessment study (Naretto et al., 2025) analyzes how interpretable global and local explanation methods may expose sensitive training information.

It surveys recent membership and attribute inference threats, framing how different explanation formats can amplify leakage of training data. Its experimental framework benchmarks exposure across these attack scenarios, motivating privacy-aware design of transparent systems.
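To make the threat model concrete, the sketch below shows a classic confidence-threshold membership inference attack. This is an illustrative toy, not the paper's method: the confidence distributions and the threshold are invented assumptions, standing in for the common observation that models tend to be more confident on training ("member") points than on unseen ones.

```python
import random

random.seed(0)

# Simulated model confidences (assumed distributions, for illustration only):
# members (training points) tend to receive higher confidence than non-members.
members = [random.uniform(0.7, 1.0) for _ in range(1000)]
non_members = [random.uniform(0.3, 0.9) for _ in range(1000)]

threshold = 0.85  # attacker predicts "member" when confidence exceeds this

tp = sum(c > threshold for c in members)      # members correctly flagged
fp = sum(c > threshold for c in non_members)  # non-members wrongly flagged
accuracy = (tp + (len(non_members) - fp)) / (len(members) + len(non_members))
print(f"attack accuracy: {accuracy:.2f}")  # noticeably above the 0.5 random-guess baseline
```

Any gap between the attack's accuracy and 0.5 quantifies leakage; explanation outputs (feature attributions, surrogate rules) can widen that gap by giving the attacker extra signals beyond raw confidences.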

The work supports integrating explainability with compliance practices (e.g., GDPR) without sacrificing accountability.


References

2025

  1. Evaluating the Privacy Exposure of Interpretable Global and Local Explainers.
    Francesca Naretto, Anna Monreale, and Fosca Giannotti
    Dec 2025