Science and technology for the eXplanation of AI decision making.

The XAI project focuses on the urgent open challenge of how to construct meaningful explanations of opaque AI/ML systems in the context of AI-based decision making. It aims to empower individuals against undesired effects of automated decision making, implement the right to explanation, and help people make better decisions while preserving (and expanding) human autonomy.


Black box AI systems for automated decision making, often based on ML over (big) data, map a user's features into a class or a score without exposing the reasons why. This is problematic both for the lack of transparency and for possible biases that the algorithms inherit from prejudices and collection artifacts hidden in the training data, which may lead to unfair or wrong decisions. The future of AI lies in enabling people to collaborate with machines, which requires good communication, trust, clarity, and understanding.
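For illustration only, the sketch below (not the XAI project's own code; the data and feature names are hypothetical) shows how a black-box classifier returns a score for an individual with no accompanying reasons, and how a simple interpretable surrogate can be fit to approximate its behaviour after the fact.

```python
# Minimal sketch, assuming scikit-learn and hypothetical features (age, income, debt).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                                  # synthetic feature matrix
y = (X[:, 1] - X[:, 2] + rng.normal(scale=0.3, size=500) > 0).astype(int)

# The "black box": an ensemble model whose internal logic is hard to inspect.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

applicant = np.array([[0.1, -0.5, 1.2]])
print("score:", black_box.predict_proba(applicant)[0, 1])      # a number, no reasons given

# A post-hoc surrogate: a shallow tree trained to mimic the black box,
# whose explicit rules serve as an approximate, human-readable explanation.
surrogate = DecisionTreeClassifier(max_depth=2).fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=["age", "income", "debt"]))
```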

The XAI project is structured into five main research lines, each addressing a specific aspect of the problem of explainable AI.

Selected Resources

A selection of papers, libraries, and tools published by the XAI project team.

The XAI project is affiliated with Z-Inspection®

The project aims to encourage debate and reflection on the responsible use of Artificial Intelligence, acting as a meeting place for the international community, including, but not limited to, the Z-Inspection® assessment method for Trustworthy AI and its affiliated labs. The Z-Inspection® approach is a validated assessment method that helps organizations deliver ethically sustainable, evidence-based, trustworthy, and user-friendly AI-driven solutions. The method is published in IEEE Transactions on Technology and Society.


Z-Inspection® is distributed under the terms and conditions of the Creative Commons Attribution-NonCommercial-ShareAlike (CC BY-NC-SA) license.