Black box AI systems for automated decision making, often based on machine learning over (big) data, map a user's features into a class or a score without exposing the reasons why. This is problematic not only for the lack of transparency, but also for possible biases that the algorithms inherit from prejudices and collection artifacts hidden in the training data, which may lead to unfair or wrong decisions. The future of AI lies in enabling people to collaborate with machines, which requires good communication, trust, clarity, and understanding.