Events, tutorials, round tables, conferences and more...
Event organised by the European Commission to bring the European AI Excellence & Trust approach to the world
Artificial Intelligence for COVID-19 prognosis: aiming at accuracy and explainability
Organized by LeADS (Legally Attentive Data Scientists) in collaboration with the Brussels Privacy Hub.
Fosca Giannotti's course on Explainable AI, within the Italian national Ph.D. programme, starts
Lecture by Fosca Giannotti "Human Artificial Intelligence: where do we go from here?"
Fosca Giannotti is the main speaker at Panel 2, "Building trust through Explainable AI complying with the European AI regulation"
Giannotti will give an introductory talk on the dimensions of Trustworthy AI. Ruggieri will explain the perils of discrimination and the solutions that ensure fairness in AI.
A contribution by Giannotti and Pedreschi to the book created by CISV / Ong 2.0, AIxIA and the University of Bari, with the support of the Ministry for Foreign Affairs [Free to download]
Calls for admissions to the National PhD in Artificial Intelligence (PhD-AI.it) are now open!
Falling Walls Foundation | Jean-Pierre Bourguignon, Edith Heard, Fosca Giannotti, Sabina Leonelli
In this tutorial, Riccardo shows how to employ existing explanation libraries on tabular datasets.
Artificial Intelligence: The Ineluctable Revolution, virtual session on "Artificial Intelligence: A blessing or a threat for society"
Introducing the foundational themes of the project: Fosca Giannotti held a talk on the main challenges related to Trustworthy AI.