Thu, Mar 04, 2021 @ 11:00 AM - 12:00 PM
Thomas Lord Department of Computer Science
Conferences, Lectures, & Seminars
Speaker: Abhinav Verma, The University of Texas at Austin
Talk Title: Neurosymbolic Reinforcement Learning
Series: CS Colloquium
Abstract: Recent advances in Artificial Intelligence (AI) have been driven by deep neural networks. However, neural networks have certain well-known flaws: they are difficult to interpret and verify, have high variability, and lack domain awareness. These issues create a deficiency of trust and are hence a significant impediment to the deployment of AI in safety-critical applications. In this talk, I will present work that addresses these drawbacks via neurosymbolic learning in the reinforcement learning paradigm. Neurosymbolic agents combine experience-based neural learning with partial symbolic knowledge expressed via programs in a Domain Specific Language (DSL). Using a DSL provides a principled mechanism to leverage high-level abstractions for machine learning models, and establishes a synergistic relationship between machine learning and program synthesis.
To overcome the challenges of policy search in non-differentiable program space, we introduce a meta-algorithm based on mirror descent, program synthesis, and imitation learning. This approach interleaves two steps: synthesized symbolic programs regularize neural learning, and imitation of gradient-based learning improves the quality of the synthesized programs. This perspective allows us to prove robust expected regret bounds and finite-sample guarantees for the algorithm. These theoretical results guaranteeing more reliable learning are accompanied by promising empirical results on complex tasks such as learning autonomous driving agents and generating interpretable programs for behavior annotation.
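The interleaved structure described above can be illustrated with a toy sketch: a "neural" policy (here a logistic policy over a 1-D state) takes gradient steps, is projected onto a tiny DSL (threshold programs) by imitation, and is then regularized back toward the synthesized program. All function names, the DSL, and the surrogate objective below are illustrative assumptions, not details from the talk or the underlying papers.

```python
import numpy as np

rng = np.random.default_rng(0)

def neural_action(theta, s):
    """Probability of action 1 in state s under the 'neural' (logistic) policy."""
    return 1.0 / (1.0 + np.exp(-(theta[0] * s + theta[1])))

def synthesize_program(theta, states):
    """'Program synthesis' by imitation: choose the threshold t whose program
    a(s) = [s > t] best matches the neural policy's greedy actions."""
    labels = neural_action(theta, states) > 0.5
    candidates = np.linspace(-2.0, 2.0, 81)
    scores = [np.mean((states > t) == labels) for t in candidates]
    return candidates[int(np.argmax(scores))]

def program_action(t, s):
    """Execute the synthesized threshold program."""
    return (s > t).astype(float)

# Hypothetical task: the optimal behavior (unknown to the learner) is to act
# when s > 0.5; gradient steps below stand in for policy-gradient updates.
states = rng.uniform(-2.0, 2.0, size=400)
target = (states > 0.5).astype(float)

theta = np.zeros(2)
for _ in range(200):
    # (1) Gradient step in the differentiable neural space.
    p = neural_action(theta, states)
    grad = np.array([np.mean((p - target) * states), np.mean(p - target)])
    theta -= 0.5 * grad

    # (2) Project onto the DSL by imitation, then (3) regularize the
    # neural policy toward the synthesized symbolic program.
    t = synthesize_program(theta, states)
    prog = program_action(t, states)
    p = neural_action(theta, states)
    grad_reg = np.array([np.mean((p - prog) * states), np.mean(p - prog)])
    theta -= 0.1 * grad_reg

final_t = synthesize_program(theta, states)
accuracy = np.mean(program_action(final_t, states) == target)
print(final_t, accuracy)
```

The final output of the loop is a symbolic program (a single threshold), so the learned behavior is directly inspectable; the neural policy serves only as a differentiable intermediate, which is the core idea the abstract describes.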
This lecture satisfies requirements for CSCI 591: Research Colloquium
Biography: Abhinav Verma is a PhD Candidate at the University of Texas at Austin, where he is advised by Swarat Chaudhuri. His research lies at the intersection of machine learning and formal methods, with a focus on building intelligent systems that are reliable, transparent, and secure. His work builds connections between the symbolic reasoning and inductive learning paradigms of artificial intelligence. He is currently supported by a JP Morgan AI Research PhD Fellowship.
Host: Mukund Raghothaman / Bistra Dilkina
Audiences: Everyone Is Invited
Contact: Assistant to CS chair