After five years of intensive research, the Science and technology for the eXplanation of AI decision making (XAI) project has officially concluded.
The project’s journey culminated in a groundbreaking new research line, formalised as “Human-AI Coevolution” and recently published in the prestigious journal Artificial Intelligence (Pedreschi et al., 2025).
This flagship work develops a mechanistic understanding of how human decision-making and AI systems influence each other over time, marking a significant evolution from the project’s initial focus on local explanations.
The XAI team thanks all collaborators, visiting professors, and the ERC for supporting this vision.
References
Pedreschi, D., Pappalardo, L., Ferragina, E., Baeza-Yates, R., Barabási, A.-L., et al. (2025). Human-AI coevolution. Artificial Intelligence, 339, 104244. https://doi.org/10.1016/j.artint.2024.104244
Human-AI coevolution, defined as a process in which humans and AI algorithms continuously influence each other, increasingly characterises our society, but is understudied in artificial intelligence and complexity science literature. Recommender systems and assistants play a prominent role in human-AI coevolution, as they permeate many facets of daily life and influence human choices through online platforms. The interaction between users and AI results in a potentially endless feedback loop, wherein users’ choices generate data to train AI models, which, in turn, shape subsequent user preferences. This human-AI feedback loop has peculiar characteristics compared to traditional human-machine interaction and gives rise to complex and often “unintended” systemic outcomes. This paper introduces human-AI coevolution as the cornerstone for a new field of study at the intersection between AI and complexity science focused on the theoretical, empirical, and mathematical investigation of the human-AI feedback loop. In doing so, we: (i) outline the pros and cons of existing methodologies and highlight shortcomings and potential ways for capturing feedback loop mechanisms; (ii) propose a reflection at the intersection between complexity science, AI and society; (iii) provide real-world examples for different human-AI ecosystems; and (iv) illustrate challenges to the creation of such a field of study, conceptualising them at increasing levels of abstraction, i.e., scientific, legal and socio-political.
@article{PPF2025,
  author    = {Pedreschi, Dino and Pappalardo, Luca and Ferragina, Emanuele and Baeza-Yates, Ricardo and Barabási, Albert-László and Dignum, Frank and Dignum, Virginia and Eliassi-Rad, Tina and Giannotti, Fosca and Kertész, János and Knott, Alistair and Ioannidis, Yannis and Lukowicz, Paul and Passarella, Andrea and Pentland, Alex Sandy and Shawe-Taylor, John and Vespignani, Alessandro},
  title     = {Human-AI coevolution},
  journal   = {Artificial Intelligence},
  volume    = {339},
  pages     = {104244},
  year      = {2025},
  month     = feb,
  publisher = {Elsevier BV},
  issn      = {0004-3702},
  doi       = {10.1016/j.artint.2024.104244}
}