The XAI project contributed research presented at IJCAI 2025 on multi-perspective NLP systems. The study (Muscato et al., 2025) addresses human disagreement in data annotation by proposing a framework that uses soft labels to capture the diversity of annotator opinions instead of aggregating them into a single ground truth.
The results show that multi-perspective models not only approximate human label distributions more closely but also achieve higher classification performance (F1-scores), while showing lower confidence on inherently subjective tasks such as irony detection.
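To illustrate the core idea, here is a minimal sketch (not the paper's implementation; function names and the toy votes are illustrative) contrasting majority-vote aggregation, which discards minority annotations, with soft labels, which keep the full distribution of annotator opinions:

```python
from collections import Counter

def hard_label(votes):
    """Majority-vote aggregation: collapse annotator votes into one label."""
    return Counter(votes).most_common(1)[0][0]

def soft_label(votes, classes):
    """Soft-label aggregation: keep the normalized distribution of votes."""
    counts = Counter(votes)
    return [counts[c] / len(votes) for c in classes]

# Five hypothetical annotators labeling one text for irony
votes = ["irony", "irony", "not_irony", "irony", "not_irony"]
print(hard_label(votes))                          # the two dissenting votes vanish
print(soft_label(votes, ["irony", "not_irony"]))  # disagreement is preserved
```

A model trained on the soft labels is supervised with the full distribution (e.g. via cross-entropy against `[0.6, 0.4]`) rather than a one-hot target, which is what lets it reflect human disagreement at prediction time.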
References
2025
Perspectives in Play: A Multi-Perspective Approach for More Inclusive NLP Systems
Benedetta Muscato, Lucia Passaro, Gizem Gezici, and Fosca Giannotti
In Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence, Sep 2025
In the realm of Natural Language Processing (NLP), common approaches for handling human disagreement consist of aggregating annotators’ viewpoints to establish a single ground truth. However, prior studies show that disregarding individual opinions can lead to the side effect of under-representing minority perspectives, especially in subjective tasks, where annotators may systematically disagree because of their preferences. Recognizing that labels reflect the diverse backgrounds, life experiences, and values of individuals, this study proposes a new multi-perspective approach using soft labels to encourage the development of the next generation of perspective-aware models—more inclusive and pluralistic. We conduct an extensive analysis across diverse subjective text classification tasks, including hate speech, irony, abusive language, and stance detection, to highlight the importance of capturing human disagreements, often overlooked by traditional aggregation methods. Results show that the multi-perspective approach not only better approximates human label distributions, as measured by Jensen-Shannon Divergence (JSD), but also achieves superior classification performance (higher F1-scores), outperforming traditional approaches. However, our approach exhibits lower confidence in tasks like irony and stance detection, likely due to the inherent subjectivity present in the texts. Lastly, leveraging Explainable AI (XAI), we explore model uncertainty and uncover meaningful insights into model predictions. All implementation details are available at our GitHub repository.
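The abstract evaluates how closely model outputs match human label distributions via Jensen-Shannon Divergence. A small self-contained sketch of that metric (base-2 logarithm, so values lie in [0, 1]; the example distributions are hypothetical, not results from the paper):

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence in bits; terms with p_i = 0 contribute 0."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jsd(p, q):
    """Jensen-Shannon Divergence: symmetrized KL against the mixture M."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

human = [0.6, 0.4]   # soft label from annotator votes (illustrative)
model = [0.7, 0.3]   # model's predicted distribution (illustrative)
print(jsd(human, model))  # small value: the model tracks the human distribution
```

A lower JSD against the annotator distribution indicates a better-calibrated multi-perspective model; identical distributions give a JSD of exactly 0.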
@inproceedings{MPG2025,
  author    = {Muscato, Benedetta and Passaro, Lucia and Gezici, Gizem and Giannotti, Fosca},
  title     = {Perspectives in Play: A Multi-Perspective Approach for More Inclusive NLP Systems},
  booktitle = {Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence},
  series    = {IJCAI-2025},
  publisher = {International Joint Conferences on Artificial Intelligence Organization},
  pages     = {9827--9835},
  doi       = {10.24963/ijcai.2025/1092},
  month     = sep,
  year      = {2025}
}