BEGIN:VCALENDAR
VERSION:2.0
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:STANDARD
DTSTART:19701101T020000
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:19700308T020000
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
SUMMARY:PhD Defense - James Preiss
DESCRIPTION:PhD Candidate: James Preiss\n
 \n
 Title: Characterizing and Improving Robot Learning: A Control-Theoretic Perspective\n
 \n
 Date/time:\n
 May 10\, 2022\, 2:00-4:00pm PDT\n
 \n
 Location:\n
 In-person: RTH 306\n
 Zoom: https://usc.zoom.us/j/3224457297\n
 \n
 Committee:\n
 Gaurav S. Sukhatme (chair)\n
 Nora Ayanian\n
 Ashutosh Nayyar\n
 Stefanos Nikolaidis\n
 \n
 \n
 Abstract:\n
 \n
 The interface between machine learning and control has enabled robots to move outside the laboratory into challenging real-world settings. Deep reinforcement learning can scale empirically to very complex systems\, but we do not yet understand precisely when and why it succeeds. Control theory focuses on simpler systems\, but delivers interpretability\, mathematical understanding\, and guarantees. We present projects that combine these strengths.\n
 \n
 In empirical work, we propose a framework for tasks with complex dynamics but known reward functions. We restrict the use of learning to the dynamics modeling stage\, and act based on this model using traditional state-space control. We apply this framework to robotic manipulation of deformable objects.\n
 \n
 In theoretical work, we deploy the well-understood linear quadratic regulator (LQR) problem as a test case to "look inside" algorithms and problem structure. First\, we investigate how reinforcement learning algorithms depend on properties of the dynamical system by bounding the variance of the REINFORCE policy gradient estimator as a function of the LQR system matrices. Second\, we introduce the framework of suboptimal covering numbers to quantify how much a good multi-system policy must change with respect to the dynamics parameters\, and bound the covering number for a simple class of LQR systems.\n
DTSTART;TZID=America/Los_Angeles:20220510T140000
LOCATION:RTH 306
URL;VALUE=URI:https://usc.zoom.us/j/3224457297
DTEND;TZID=America/Los_Angeles:20220510T160000
END:VEVENT
END:VCALENDAR