Cognitive Amplification - TEDx Cremona Salon


Date: December 05, 2023



AI can amplify human cognitive abilities in complex decision-making. Fosca Giannotti's vision promotes collaboration and the expansion of knowledge, emphasizing responsible AI use for conscious, non-conformist innovation.

This talk was given at a TEDx event in the city of Cremona, Italy.

(Activate English subtitles for the translation.)



Transcribed and translated below:

Good evening. In the collective imagination, we often associate artificial intelligence with a tool that replaces humans, perhaps in complex activities like driving a vehicle, generating a text, or creating an image. However, there is a deeper opportunity: artificial intelligence capable of amplifying human cognitive abilities, helping people make complex decisions, such as diagnosing a rare disease or managing an environmental crisis.

Cognitive amplification means enhancing rather than replacing, with machines designed for thinking. This implies collaboration, mutual understanding, and a shared framework of rules. This is what we call reliable, human-centered artificial intelligence: AI tailored to people. These systems learn from the data we have provided, curated, and shared, but they are designed to empower individuals and society in all dimensions, within a framework of shared ethical and legal values. This includes non-discrimination, non-manipulation, human oversight, respect for human rights, transparency, and ultimately, ensuring that the final decision remains with humans.

This is the vision that the European scientific community and the European Commission aim to formulate and implement through various synergistic actions. Let me tell you about the basic building blocks of this vision and what it requires: transparency and explanation that empower. When faced with a difficult decision, would you prefer to consult a friend who always has an opinion on everything and can give you the right advice, or a friend who helps you think by asking questions but never gives an opinion? For difficult decisions, we probably need a bit of both: machines that lay out the pros and cons, highlighting our biases as well as their own. This is what we mean by opening the black box: building AI technologies that can do this, machines capable of collaboration that know when to defer to human decision-makers.

We refer to these as Socratic machines. Faced with new or uncertain cases, they do not hesitate to defer to human decision-makers. An example I like to use is the Dr. House model: an intelligent expert collaborating with a team of equally intelligent experts who challenge him and propose new hypotheses and data, while the final decision and responsibility rest with Dr. House. This is what we mean by machines for thinking: machines that help us move from rational to intuitive systems and focus on what matters.

There is also a social dimension to this human-centered AI. We exist within socio-technical systems where machines and people interact and influence each other. However, a group of intelligent individuals does not necessarily form an intelligent group. If navigation systems give everyone the same advice and route, traffic congestion increases. This is an issue with current recommendation systems, which tend to keep us in a state of conformity. This conformity is not always beneficial in the long run. Therefore, balancing conformity and diversity is important. Even at the social level, socio-technical systems should adhere to the same values of non-conformity and non-manipulation.

There are many pitfalls: data traps, algorithm traps, and traps of concentrated power over computational capacity. Machine learning systems trained by big tech companies are not always robust or fair. We want AI to stay within the value system we have discussed, ensuring that it supports doctors, for example. We know how to achieve this for predictive AI, but for generative AI we are still working out what robustness and transparency even mean.

However, we do not want to miss the advantages of having artificial companions that help us within the framework of these values. As my friend, the science fiction writer Isaac Asimov, said, in the face of useful technologies we have always managed to implement the safeguards needed for safe use. We did this with electrical wires by adding insulation, with knives by adding handles, and with stairs by adding railings. The more effort we put into safety, the more we can make use of the technology. This is exactly the vision the European Union is pursuing with its new AI regulation, the AI Act: stricter rules for riskier services, lighter ones for less dangerous applications. This is a path to prepare for the future, aiming not to hinder innovation but to move towards responsible innovation. We must defend it at all costs.

The images I showed were generated with Adobe's generative AI system, which is trained on images that authors have made available. Every time an image is generated, royalties are paid, respecting authors' rights by design. So we can respect and enforce values like copyright as we move towards responsible innovation. Thank you.