Approaching AI-explainability from interdisciplinary research in symbolic AI

Andrea Vestrucci
Research professor at Starr King School, Oakland, California
Location: Department of Computer Science, University of Pisa, Aula Polifunzionale. Time: 14:30
Research in AI-explainability focuses on a plurality of goals, from trustworthiness to informativeness to fairness. Current research and development of symbolic AI systems might address and foster interactions between combinations of these goals, especially within an interdisciplinary framework. I present explorations and assessments of philosophical and ethical arguments in automated reasoning environments that might improve the use of symbolic AI systems as trustworthiness checkers, with specific emphasis on machine compliance with logical and ethical constraints. I also discuss how these applications can enhance the degree of concrete, community-based human-machine interaction, with specific focus on our collaboration in the Bamberg “Smart City” project. Finally, I outline ongoing investigations on belief revision that bear the potential to increase human accessibility to complex epistemic representations.
Bio
Andrea Vestrucci is a research professor at Starr King School, Oakland, California, and a visiting scholar at the chair of AI System Engineering (AISE), University of Bamberg. His work at AISE focuses on AI ethics (metaethics of regulations) and AI epistemology (computational philosophy in FOL and HOL). His research expands upon the notions of universal languages and abstract objects. He is a recipient of the Australia Award (Ministry of Tertiary Education, Skills, Science and Research) and a laureate of the Academic Society of Geneva.