PhD Defense - KR Zentner
Thu, Feb 01, 2024 @ 02:00 PM - 04:00 PM
Thomas Lord Department of Computer Science
Title: Leveraging Cross-Task Transfer in Sequential Decision Problems: Scalable Reinforcement Learning for Robotics
Defense Committee: Gaurav Sukhatme (chair), Heather Culbertson, Stefanos Nikolaidis, Laurent Itti, Bhaskar Krishnamachari
Date: Feb 1, 2024, 2 p.m. - 4 p.m. - RTH 217
Abstract: The past few years have seen an explosion of interest in using machine learning to make robots capable of learning a diverse set of tasks. Potentially, these robots could operate in close proximity to humans, assisting them with a wide variety of needs and learning new tasks as instructed. However, such robots generally use Reinforcement Learning to learn detailed sub-second interactions, and consequently require large amounts of data for each task. In this thesis we explore how Reinforcement Learning can be combined with Transfer Learning to re-use data across tasks. We begin by reviewing the state of Multi-Task and Meta RL and describing the motivations for using Transfer Learning. Then, we describe a basic framework for using Transfer Learning to efficiently learn multiple tasks, and show that it requires predicting how effectively transfer can be performed across tasks. Next, we present a simple rule, grounded in information theory, for predicting the effectiveness of Cross-Task Transfer, which we call the "Transfer Cost Rule." We discuss the theoretical implications of that rule and present several quantitative evaluations of it. We then present two lines of work that use these insights to perform efficient Transfer Reinforcement Learning. The first uses Cross-Task Co-Learning and Plan Conditioned Behavioral Cloning to share skill representations produced by a Large Language Model, and is able to learn many tasks in a simulated environment from a single demonstration each. The second uses Two-Phase KL Penalization to enforce a (potentially off-policy) trust region. These advances in Transfer RL may enable robots to be used in a wider range of applications, such as in the home or office. The insight provided by the Transfer Cost Rule may also be relevant to a wide audience of Reinforcement Learning practitioners, since it provides a practical and theoretically grounded explanation for the performance of Deep Reinforcement Learning algorithms.
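For readers unfamiliar with KL-penalized trust regions, the following is a minimal, generic sketch of the idea referenced in the abstract: a policy-gradient surrogate objective with a KL-divergence penalty that discourages the updated policy from drifting far from the data-collecting policy. This is an illustration under assumed names and coefficients, not the thesis's Two-Phase KL Penalization method, whose details are not given here.

    # Illustrative sketch only: a generic KL-penalized policy objective.
    # All function names and the penalty coefficient `beta` are assumptions
    # for illustration; this is not the author's Two-Phase method.
    import numpy as np

    def kl_categorical(p, q):
        """KL(p || q) for categorical action distributions over the same support."""
        p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
        return float(np.sum(p * (np.log(p) - np.log(q))))

    def penalized_surrogate(advantages, ratio, pi_old, pi_new, beta=1.0):
        """Importance-weighted surrogate return minus a KL penalty that keeps
        the new policy inside a soft trust region around the old policy."""
        surrogate = float(np.mean(ratio * advantages))   # policy-gradient surrogate
        penalty = beta * kl_categorical(pi_old, pi_new)  # soft trust-region constraint
        return surrogate - penalty

    # Toy usage: two actions, a small batch of transitions.
    pi_old = [0.6, 0.4]
    pi_new = [0.5, 0.5]
    advantages = np.array([1.0, -0.5, 0.2])
    ratio = np.array([0.9, 1.1, 1.0])  # pi_new(a|s) / pi_old(a|s) per sample
    print(penalized_surrogate(advantages, ratio, pi_old, pi_new, beta=0.5))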
Zoom link: https://usc.zoom.us/j/96965616504?pwd=QngwQTJsTXJkbXJJNU9hRVV2Mk1DQT09
Location: Ronald Tutor Hall of Engineering (RTH) - 217
Audiences: Everyone Is Invited
Contact: CS Events
Event Link: https://usc.zoom.us/j/96965616504?pwd=QngwQTJsTXJkbXJJNU9hRVV2Mk1DQT09