Distinguished seminars on Explainable AI

Each Distinguished Seminar on Explainable AI lasts 90 minutes: the first 45 are dedicated to the seminar and the rest to a round table, in which invited guests can ask questions to deepen the topic and open our minds. Our goal is to bring together bright minds to give talks focused on the many aspects of Explainability and Artificial Intelligence, to foster learning and inspiration that matter.


Cynthia Rudin

Let us consider a difficult computer vision challenge. Would you want an algorithm to determine whether you should get a biopsy, based on an X-ray? That's usually a decision made by a radiologist, based on years of training. We know that algorithms haven't worked perfectly for a multitude of other computer vision applications, and biopsy decisions are harder than just about any other application of computer vision that we typically consider. The interesting question is whether an algorithm could be a true partner to a physician, rather than making the decision on its own. To do this, at the very least, we would need an interpretable neural network that is as accurate as its black-box counterparts.

This talk will discuss two approaches to interpretable neural networks: (1) case-based reasoning, where parts of images are compared to parts of prototypical images for each class, and (2) neural disentanglement, using a technique called concept whitening. The case-based reasoning technique is strictly better than saliency maps, and the concept whitening technique provides a strict advantage over the post hoc use of concept vectors.
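To make the case-based reasoning idea concrete, here is a minimal sketch of prototype-part scoring ("this part of the image looks like that prototypical part"), assuming a generic CNN backbone; the shapes, names, and toy tensors are illustrative only and are not the models presented in the talk.

```python
# Minimal sketch of prototype-part scoring for case-based reasoning.
# Assumes a generic CNN backbone; all names and shapes are illustrative.
import torch

def prototype_scores(feature_map, prototypes):
    """feature_map: (C, H, W) latent patches from a CNN.
    prototypes:  (P, C) learned class prototypes.
    Returns one similarity score per prototype, taken at its best-matching patch."""
    C, H, W = feature_map.shape
    patches = feature_map.reshape(C, H * W).T           # (H*W, C) candidate image parts
    dists = torch.cdist(prototypes, patches)            # (P, H*W) L2 distances
    min_dists, _ = dists.min(dim=1)                     # closest patch per prototype
    return torch.log((min_dists + 1) / (min_dists + 1e-4))  # high score = close match

# Class logits are then a transparent weighted sum of these prototype scores,
# so each prediction can be traced back to the prototypes that triggered it.
feature_map = torch.randn(64, 7, 7)   # toy latent features
prototypes = torch.randn(10, 64)      # toy prototypes
print(prototype_scores(feature_map, prototypes))
```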

Cynthia Rudin is a professor of computer science, electrical and computer engineering, and statistical science at Duke University, and directs the Prediction Analysis Lab, whose main focus is interpretable machine learning. She is also an associate director of the Statistical and Applied Mathematical Sciences Institute (SAMSI). Previously, Prof. Rudin held positions at MIT, Columbia, and NYU. She holds an undergraduate degree from the University at Buffalo and a PhD from Princeton University.

Cynthia Rudin

Apr 20, 5.00pm - 6.30pm CEST

Interpretable Neural Networks for Computer Vision: Clinical Decisions that are Computer-Aided, not Automated

The talk will address clinical decision-making and interpretable deep neural networks. The main question is whether an algorithm could be a true partner to a physician, rather than making the decision on its own. Two approaches will be discussed: (1) case-based reasoning and (2) neural disentanglement, covering the advantages of a technique called concept whitening.


  • Chair - Fosca Giannotti
  • Discussants - Riccardo Guidotti, Luca Pappalardo, Cecilia Panigutti

Przemek Biecek

Explanatory Model Analysis: Explore, Explain and Examine Predictive Models is a set of methods and tools designed to build better predictive models and to monitor their behaviour in a changing environment. Today, the true bottleneck in predictive modelling is neither the lack of data, nor the lack of computational power, nor inadequate algorithms, nor the lack of flexible models. It is the lack of tools for model exploration (extraction of relationships learned by the model), model explanation (understanding the key factors influencing model decisions) and model examination (identification of model weaknesses and evaluation of a model's performance). This book presents a collection of model-agnostic methods that may be used for any black-box model, together with real-world applications to classification and regression problems.
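As a hedged illustration of what a model-agnostic exploration tool looks like in practice, the sketch below computes permutation feature importance for an arbitrary black-box classifier; the dataset and model are placeholders chosen for the example and are not taken from the book.

```python
# Minimal sketch of model-agnostic exploration: permutation feature importance.
# The dataset and model below are illustrative; any fitted black box would do.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)  # the "black box"

# Model exploration: which inputs does the fitted model actually rely on?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1][:5]:
    print(f"feature {i}: importance {result.importances_mean[i]:.4f}")
```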

Przemek Biecek is an associate professor at Warsaw University of Technology and the University of Warsaw. He is interested in model visualisation, explanatory model analysis, predictive modelling and applications in healthcare. He graduated in software engineering and mathematical statistics. In 2016, he formed the research group MI2DataLab, which develops methods and tools for predictive model analysis. He is a member of the ResponsibleAI group in the GPAI initiative.

Przemyslaw Biecek

May 25, 5.00pm - 6.30pm CEST

Trust, but verify. How explainable artificial intelligence can be used to validate predictive models.

The talk will have three parts. In the first, I will show how vulnerable XAI methods are to attacks, and why we should approach explanations with a great deal of scepticism. In the second, I will show that many XAI methods can be used effectively for model exploration, and I will talk about Explanatory Model Analysis and the Rashomon perspective. In the third, I will talk about a grammar for interactive model explanation and the advantages of a fast feedback loop in human-model interaction.


  • Chair - Fosca Giannotti
  • Discussants - Salvo Rinzivillo, Anna Monreale, Francesca Pratesi

Ruth Byrne

"Ruth Byrne is the Professor of Cognitive Science at Trinity College Dublin, University of Dublin, in the School of Psychology and the Institute of Neuroscience, a Chair created for her by the University in 2005.

Her research expertise is in the cognitive science of human thinking, including experimental and computational investigations of reasoning and imaginative thought. Her most recent book is a co-edited volume with Kinga Morsanyi on 'Thinking, Reasoning, and Decision-making in Autism' (2019, Routledge). She has also written 'The Rational Imagination: How People Create Alternatives to Reality', published in 2005 by MIT Press (and selected for open peer commentary by the Behavioral and Brain Sciences journal in 2007), and 'Deduction', co-authored with Phil Johnson-Laird, published in 1991 by Erlbaum Associates. She has published over 100 articles in journals such as Annual Review of Psychology, Cognition, Cognitive Psychology, Cognitive Science, Current Directions in Psychological Science, Psychological Review, and Trends in Cognitive Sciences.

Ruth Byrne

June 15, 5.00pm - 6.30pm CEST

The psychology of counterfactual explanations in XAI

The use of counterfactuals in Explainable Artificial Intelligence (XAI) can benefit from experimental discoveries in psychology on how people create counterfactual alternatives to reality and how they reason from counterfactual conditionals. I discuss the cognitive processes that people rely on when they engage in counterfactual thought, compared to causal thought. I illustrate some of the similarities and differences between counterfactual and causal thought with evidence from our recent eye-tracking studies on how people comprehend counterfactual conditionals and causal assertions. I demonstrate the relevance of this evidence to XAI and describe our recent experiments on the effects of counterfactual and causal explanations on how people understand the decisions of AI systems. I outline potential future directions for the refinement of the use of counterfactuals in XAI.
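For readers less familiar with how counterfactuals are used on the XAI side, the sketch below shows the basic idea of a counterfactual explanation: the smallest change to an input that flips a model's decision. The greedy search, toy data, and model are illustrative assumptions only, not methods discussed in the talk.

```python
# Minimal sketch of a counterfactual explanation: perturb an input until the
# model's decision flips. Toy data, model, and greedy search are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

def greedy_counterfactual(x, model, step=0.1, max_iter=200):
    """Nudge one feature at a time toward the decision boundary until the label flips."""
    x_cf, target = x.copy(), 1 - model.predict([x])[0]
    for _ in range(max_iter):
        if model.predict([x_cf])[0] == target:
            return x_cf
        # try a small step up or down on each feature, keep the one that most
        # increases the probability of the target (flipped) class
        candidates = [x_cf + d * step * np.eye(len(x))[j]
                      for j in range(len(x)) for d in (-1, 1)]
        probs = model.predict_proba(candidates)[:, target]
        x_cf = candidates[int(np.argmax(probs))]
    return x_cf

x = X[0]
x_cf = greedy_counterfactual(x, model)
print("original:      ", np.round(x, 2), "->", model.predict([x])[0])
print("counterfactual:", np.round(x_cf, 2), "->", model.predict([x_cf])[0])
```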

Freddy Lecue

As artificial intelligence has become tightly interwoven with society, having tangible consequences and influence, calls for explainability and interpretability of these systems have become increasingly prevalent. Explainable AI (XAI) attempts to alleviate concerns of transparency, trust and ethics in AI by making AI systems accountable, interpretable and explainable to humans. This workshop aims to encapsulate these concepts under the umbrella of Explainable Agency and to bring together researchers and practitioners working on different facets of explainable AI, from diverse backgrounds, to share challenges, new directions and recent research in the field. We especially welcome research from fields including, but not limited to, artificial intelligence, human-computer interaction, human-robot interaction, cognitive science, human factors and philosophy.

Dr. Freddy Lecue is the Chief Artificial Intelligence (AI) Scientist at CortAIx (Centre of Research & Technology in Artificial Intelligence eXpertise) at Thales in Montreal, Canada. He is also a research associate at INRIA, in WIMMICS, Sophia Antipolis, France. Before joining the new R&T lab of Thales dedicated to AI, he was AI R&D Lead at Accenture Labs in Ireland from 2016 to 2018. Prior to joining Accenture, he was a research scientist and lead investigator in large-scale reasoning systems at IBM Research from 2011 to 2016, a research fellow at The University of Manchester from 2008 to 2011, and a research engineer at Orange Labs from 2005 to 2008.

Freddy Lecue

July 13, 5.00pm - 6.30pm CEST

Explanation in AI: Watch the Semantic Gap!

The term XAI refers to a set of tools for explaining AI systems of any kind, beyond Machine Learning. Even though these tools aim to address explanation in the broader sense, they are not designed for all users, tasks, contexts and applications. This presentation will describe recent progress on XAI, with a focus on Machine Learning and its need for semantics, by reviewing its approaches, motivation, industrial applications, and limitations.


  • Chair - Fosca Giannotti
  • Discussants - Mattia Setzu, Cecilia Panigutti
