Symbolic planning in the neural age - towards generalizable, interpretable and reliable autonomy
10 April 2025 | Sala Stringa - Online | 11:00 | Daniele Meli (University of Verona) and Celeste Veronese (University of Verona)
Abstract
Data-driven AI, particularly reinforcement learning, has achieved outstanding results in autonomous planning, requiring little or no domain knowledge and solving complex single- and multi-agent tasks. However, a key pitfall of most algorithms is their high sample inefficiency, which prevents generalization beyond the training scenarios, a fundamental requirement of truly intelligent artificial agents.
In this talk, we will show that symbolic AI can still play a crucial role in achieving generalizable embodied intelligence, thanks to its inherent abstraction, interpretability and generalization capabilities. In the first part, we will present a framework for integrated symbolic task planning and learning in robotic agents, which exploits human feedback to progressively refine domain knowledge and achieve enhanced reliability and explainability. In the second part, we will show how to realize a synergistic neurosymbolic integration for interpretable, adaptable and generalizable reinforcement learning, inspired by Kahneman's popular dual-process theory of cognition (thinking fast and slow).
Bio
Daniele Meli received his Master's degree in Automation Engineering from Politecnico di Bari, Italy, in 2017, and his PhD in Computer Science from the University of Verona in 2021. He is currently a research fellow and assistant professor in Artificial Intelligence at the University of Verona. His research focuses on robotics and artificial intelligence, specifically on explainable and trustworthy AI, inductive logic programming, and the integration of symbolic and probabilistic AI.
Celeste Veronese is a second-year PhD student at the University of Verona, supervised by Prof. Alessandro Farinelli and Daniele Meli, PhD. Her current research focuses on the integration of symbolic, logic-based reasoning and reinforcement learning to improve the efficiency, interpretability, and generalization of AI systems, particularly in complex decision-making environments.