Wed, Nov 16, 2022 @ 03:00 PM - 04:30 PM
Thomas Lord Department of Computer Science
PhD Candidate: Aleksei Petrenko
Thesis title: High-Throughput Methods for Simulation and Deep Reinforcement Learning
Committee members: Gaurav S. Sukhatme (chair), Stefanos Nikolaidis, Jesse Thomason, Mike Zyda, and Rahul Jain
Location: RTH 306
Date: November 16, 2022
Time: 3:00 PM - 4:30 PM
Zoom link: https://usc.zoom.us/j/8712894950
Advances in computing hardware and machine learning have enabled a data-driven approach to robotic autonomy in which control policies are learned from raw data through interactive experience collection. In this thesis, we discuss a specific implementation of this approach: we show how control policies can be trained in simulated environments using model-free deep reinforcement learning and then deployed on real robotic systems.
We build toward this vision by developing tools for efficient simulation and learning under a constrained computational budget. We improve the systems design of reinforcement learning algorithms and simulators to create high-throughput, GPU-accelerated infrastructure for rapid experimentation. We then apply these systems and algorithms to continuous control problems in challenging domains. We first consider the problem of quadrotor swarm coordination: by scaling up training in a CPU-based flight simulator, we train robust policies that can control physical quadrotors flying in tight formations. We then use large-batch reinforcement learning in the massively parallel physics simulator Isaac Gym to learn dexterous object manipulation with a multi-fingered robotic hand, and we transfer these skills from simulation to reality using automatic domain randomization.
The high-throughput learning infrastructure developed for these and other projects is released as the open-source codebase "Sample Factory 2.0" to facilitate and accelerate further progress in the field.
Audiences: Everyone Is Invited
Contact: Lizsl De Leon