Science and Technology for the eXplanation of AI Decision Making
The XAI project focuses on the urgent open challenge of how to construct meaningful explanations of opaque AI/ML systems in the context of AI-based decision making. It aims to empower individuals against undesired effects of automated decision making, to implement the "right to explanation", and to help people make better decisions while preserving (and expanding) human autonomy.
Black-box AI systems for automated decision making, often based on ML over (big) data, map a user's features into a class or a score without exposing the reasons why. This is problematic both for the lack of transparency and for possible biases inherited by the algorithms from prejudices and collection artifacts hidden in the training data, which may lead to unfair or wrong decisions. The future of AI lies in enabling people to collaborate with machines, which requires good communication, trust, clarity, and understanding.
The project aims at developing:
- an explanation infrastructure for benchmarking, equipped with platforms for the users' assessment of the explanations;
- an ethical-legal framework, in compliance with the provisions of the GDPR;
- a repertoire of case studies in explanation-by-design, mainly focused on health and fraud detection applications.
Upcoming Seminars
and tutorials, round tables, conferences...
Dec 14, 2023
h. 10:00
Presenter: Kode
Officine Garibaldi - il Cantiere delle Idee, Via Vincenzo Gioberti, 39, 56124 Pisa PI, Italy
Title: XAI Library tutorial
When: 14/12/2023 at 10:00
Where: Officine Garibaldi.
Abstract: The Kode team (Paolo Cintia and Andrea Spinelli) will present the recent refactoring of the XAI Library to the XAI group, offering a tutorial in which they will illustrate the use of the new library.
Streaming: The link to follow the online tutorial will be available on the Events channel in the Teams Group: XAI@KDD.
----------
Microsoft Teams meeting
Join on your computer, mobile app or room device
Click here to join the meeting: <https://teams.microsoft.com/l/meetup-join/19%3a511689e6d6494b2c95e95fe823c57aae%40thread.tacv2/1702308391444?context=%7b%22Tid%22%3a%22c7456b31-a220-47f5-be52-473828670aa1%22%2c%22Oid%22%3a%22729b4d16-0567-46a8-a742-d2ae1bf09a4a%22%7d>
Meeting ID: 362 749 499 63
Passcode: 7bBvuQ
Nov 30, 2023
h. 11:30
Presenter: Riccardo Guidotti
Officine Garibaldi - il Cantiere delle Idee, Via Vincenzo Gioberti, 39, 56124 Pisa PI, Italy
When: 30/11/2023 at 11:30
Where: Officine Garibaldi. Stream details below.
Abstract: While existing clustering methods only provide the assignment of records to clusters without justifying the partitioning, we propose tree-based clustering methods that offer interpretable data partitioning through a shallow decision tree. These decision trees enable easy-to-understand explanations of cluster assignments through short and understandable split conditions. Experiments on synthetic and real datasets show the proposed methods to be more effective than both traditional and interpretable clustering approaches in terms of standard evaluation measures and runtime.
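The idea of explaining a partitioning with a shallow tree can be illustrated with a simple surrogate sketch: cluster the data with any algorithm, then fit a depth-limited decision tree to reproduce the assignments, so each cluster comes with a short rule. This is a minimal illustration of the general technique, not the method presented in the seminar; all model choices and parameters below are assumptions.

```python
# Illustrative sketch: explaining cluster assignments with a shallow tree.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic data with three well-separated groups.
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

# Step 1: obtain cluster labels from any clustering algorithm.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Step 2: fit a shallow tree to reproduce the assignment; its depth
# bounds the length of the explanation for each cluster.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, labels)

# The tree's split conditions are the human-readable explanation.
print(export_text(tree, feature_names=["x0", "x1"]))

# How faithfully the interpretable surrogate mimics the clustering.
agreement = (tree.predict(X) == labels).mean()
print(f"surrogate agreement with clustering: {agreement:.2f}")
```

Note that this two-step surrogate differs from clustering-by-tree "by design": the seminar's methods build the interpretable partition directly rather than approximating an existing one.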
_____________
Microsoft Teams meeting
Join on your computer, mobile app or room device
Click here to join the meeting
Meeting ID: 366 914 674 820
Passcode: 9gKaZe
Or call in (audio only)
+39 02 3056 4191,,906266144# Italy, Milano
Phone Conference ID: 906 266 144#
_____________
Nov 09, 2023
h. 11:30
Presenter: Eleonora Cappuccio
Officine Garibaldi - il Cantiere delle Idee, Via Vincenzo Gioberti, 39, 56124 Pisa PI, Italy
This seminar will present FIPER, a visualization tool that combines explanations through rules and feature importance.
The seminar will open with an overview of the importance of designing human-centered explanations, highlight use cases, and present the results of a preliminary user test. Its main purpose is to show and discuss new developments of the tool and possible applications.
Oct 26, 2023
h. 11:30
Presenter: Giovanni Puccetti
Officine Garibaldi - il Cantiere delle Idee, Via Vincenzo Gioberti, 39, 56124 Pisa PI, Italy
When & Where: Thursday, October 26, at 11:30 @ Officine Garibaldi. Stream details below.
Abstract: Language models of all sizes have improved at a fast pace in recent years. However, beyond measures of performance on downstream tasks, it is hard to understand what degree of linguistic knowledge they have, and even harder to understand their inner workings.
Through linguistic probing of language models such as BERT and RoBERTa, I investigate their ability to encode linguistic properties and find a link between this ability and the phenomenon of outliers: parameters within language models that show unexpected behaviours. These findings help explain some of the properties typical of the attention mechanism at the core of such models.
Outliers also have a strong impact on the downstream performance of language models, therefore I apply these models to Named Entity Recognition in patents and study how they perform in this setting.
Finally, I present a brief study on fine-tuning Large Language Models for Italian.
______
Microsoft Teams meeting
Join on your computer, mobile app or room device
Click here to join the meeting
Meeting ID: 374 418 319 658
Passcode: zMGdht
Or call in (audio only)
+39 02 3056 4191,,45663040# Italy, Milano
Phone Conference ID: 456 630 40#
______
Oct 19, 2023
h. 11:30
Presenter: Marzio Di Vece
Officine Garibaldi - il Cantiere delle Idee, Via Vincenzo Gioberti, 39, 56124 Pisa PI, Italy
Welcome seminar by Marzio Di Vece, a new postdoc at SNS, who will tell us about his past and current research interests.
When & Where
October 19th, 11:30am @ Officine Garibaldi.
Stream details below.
____
Microsoft Teams meeting
Join on your computer, mobile app or room device
Click here to join the meeting
Meeting ID: 372 902 472 996
Passcode: igJK69
Partners
Scuola Normale Superiore
CNR - National Research Council
University of Pisa
Department of Computer Science
The 5 XAI research lines
The XAI project faces the challenge of requiring AI to be explainable and understandable in human terms, and articulates its work along five research activities: (1) algorithms to infer local explanations and generalize them to global ones (post-hoc), as well as algorithms that are transparent by design; (2) languages for expressing explanations in terms of logic rules, with statistical and causal interpretation; (3) an XAI watchdog platform for sharing experimental datasets and explanation algorithms; (4) a repertoire of case studies aimed at involving final users; (5) a framework to study the interplay between XAI and the ethical and legal dimensions.
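The first research activity, inferring post-hoc local explanations of a black box, can be sketched with a generic local-surrogate procedure: perturb an instance, query the black box, and fit a weighted linear model whose coefficients act as local feature importances. This LIME-style sketch is only illustrative of the general idea; the models, perturbation scale, and kernel below are assumptions, not the project's own algorithms.

```python
# Illustrative sketch of a post-hoc local explanation via a linear surrogate.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# A black-box classifier standing in for an opaque decision-making system.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

x0 = X[0]  # the individual decision to explain
rng = np.random.default_rng(0)

# Sample a neighbourhood around x0 and query the black box on it.
neighbours = x0 + rng.normal(scale=0.3, size=(1000, x0.size))
target = black_box.predict_proba(neighbours)[:, 1]

# Weight samples by proximity to x0 (Gaussian kernel) and fit a linear
# surrogate; its coefficients are the local explanation.
weights = np.exp(-np.linalg.norm(neighbours - x0, axis=1) ** 2)
surrogate = Ridge(alpha=1.0).fit(neighbours, target, sample_weight=weights)

print("local feature importances:", np.round(surrogate.coef_, 3))
```

A global explanation (the other half of research activity 1) would aggregate or generalize many such local surrogates rather than explain a single instance.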