ACAI 2021 - The Advanced Course on AI: Human-Centered AI

1. Explainable Machine Learning for Trustworthy AI

Fosca Giannotti, CNR ■ Riccardo Guidotti, University of Pisa

Black-box AI systems for automated decision making, often based on machine learning over (big) data, map a user's features into a class or a score without exposing the reasons why. This is problematic not only for the lack of transparency, but also for possible biases inherited by the algorithms from human prejudices and collection artifacts hidden in the training data, which may lead to unfair or wrong decisions. The future of AI lies in enabling people to collaborate with machines to solve complex problems. Like any effective collaboration, this requires good communication, trust, clarity and understanding. Explainable AI addresses these challenges, and for years different AI communities have studied this topic, leading to different definitions, evaluation protocols, motivations, and results. This lecture provides a reasoned introduction to the work on Explainable AI (XAI) to date, and surveys the literature with a focus on machine learning and symbolic AI approaches. The challenges and current achievements of the ERC project "XAI: Science and technology for the eXplanation of AI decision making" will also be presented. We will motivate the need for XAI in real-world, large-scale applications, presenting state-of-the-art techniques and best practices, and discussing the many open challenges.
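To make the black-box problem concrete, one widely used XAI technique is the global surrogate: an interpretable model is trained to mimic the predictions of an opaque one, and its rules serve as an approximate explanation. The sketch below is only an illustration of that general idea, not a method from the lecture; the dataset and model choices are assumptions.

```python
# A minimal sketch of a global surrogate explanation (illustrative only:
# the dataset and models are assumptions, not the lecture's methods).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# The "black box": an ensemble whose individual predictions are hard to trace.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# An interpretable surrogate trained to mimic the black box's predictions,
# not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box. A faithful
# surrogate's rules can be read as an approximate explanation of the model.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2f}")
print(export_text(surrogate, feature_names=list(data.feature_names)))
```

The printed tree exposes a small set of if-then rules approximating the ensemble's behavior; the fidelity score quantifies how much of that behavior the rules actually capture.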

2. Social Artificial Intelligence

Dino Pedreschi, University of Pisa ■ Frank Dignum, Umeå University

For AI scientists and social scientists alike, the challenge is to better understand how AI technologies could support or affect emerging social challenges, and how to shape and regulate human-centered AI ecosystems that help mitigate harms and foster beneficial outcomes oriented toward the social good. This tutorial will discuss this challenge from two sides. First, we will look at the network effects of AI and their impact on society. Examples range from urban mobility, where travellers are helped by smart assistants to fulfill their agendas, to public discourse and markets, where the diffusion of opinions as well as economic and financial decisions are shaped by personalized recommendation systems. In principle, AI can empower communities to face complex societal challenges. Or it can create further vulnerabilities and exacerbate problems such as bias, inequality, polarization, and the depletion of social goods. We will investigate the role of AI in these situations using data-based simulations that can be used to study the network effects of particular AI-driven individual behavior. Secondly, we will look at the use of behavioral models as an addition to the data-based approach, in order to get a better grip on emerging societal phenomena that depend not only on, e.g., social media, but also on physical events for which no data are readily available. An example is tracking extremist behavior in order to prevent violent events. An extreme event such as the storming of the US Capitol in January 2021 can be traced back on Twitter, but not every potentially extremist hashtag on Twitter leads to violent behavior. There are also physical contacts, influences and group structures outside social media that play a big role in the process. We will look at some case studies in depth and discuss approaches to analyse them with the appropriate tools.
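One family of simulations of this kind is bounded-confidence opinion dynamics, where agents only move toward opinions already close to their own; a "recommender" that preferentially shows agents like-minded peers can then be switched on or off to probe network effects. The sketch below is a toy model of that idea, assuming illustrative parameters (it is not a model presented in the tutorial).

```python
# A toy bounded-confidence opinion-dynamics simulation (illustrative
# assumption, not the tutorial's model). Agents hold opinions in [-1, 1]
# and average toward peers whose opinion is within a confidence bound.
# `like_minded_bias` is the probability that a hypothetical recommender
# pairs an agent with its most like-minded peer instead of a random one.
import random

def simulate(n_agents=100, steps=2000, like_minded_bias=0.0, seed=0):
    """Return (initial_variance, final_variance) of the opinion distribution."""
    rng = random.Random(seed)
    opinions = [rng.uniform(-1, 1) for _ in range(n_agents)]

    def variance(vals):
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals) / len(vals)

    v_initial = variance(opinions)
    for _ in range(steps):
        i = rng.randrange(n_agents)
        if rng.random() < like_minded_bias:
            # Recommender: pair i with the peer holding the closest opinion.
            j = min((k for k in range(n_agents) if k != i),
                    key=lambda k: abs(opinions[k] - opinions[i]))
        else:
            j = rng.randrange(n_agents)
        # Bounded confidence: interact only if opinions are already close.
        if i != j and abs(opinions[i] - opinions[j]) < 0.5:
            mid = (opinions[i] + opinions[j]) / 2
            opinions[i] += 0.5 * (mid - opinions[i])
            opinions[j] += 0.5 * (mid - opinions[j])
    return v_initial, variance(opinions)

# Compare the dispersion of opinions under random exposure vs. a
# like-minded recommender; lower final variance means more consensus.
print("random exposure:    ", simulate(like_minded_bias=0.0))
print("biased recommender: ", simulate(like_minded_bias=0.9))
```

Each pairwise averaging step preserves the mean and shrinks the gap between the two agents, so the variance can only decrease over a run; what differs between the two settings is how much, and into how many clusters the population settles.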