BEGIN:VCALENDAR
METHOD:PUBLISH
PRODID:-//Apple Computer\, Inc//iCal 1.0//EN
X-WR-CALNAME;VALUE=TEXT:USC
VERSION:2.0
BEGIN:VEVENT
DESCRIPTION:Speaker: Chi Jin, UC Berkeley
Talk Title: Machine Learning: Why Do Simple Algorithms Work So Well?
Series: CS Colloquium
Abstract: While state-of-the-art machine learning models are deep, large-scale, sequential and highly nonconvex, the backbone of modern learning is formed by simple algorithms such as stochastic gradient descent, or Q-learning (in the case of reinforcement learning tasks). A basic question endures---why do simple algorithms work so well even in these challenging settings? \n
\n
This talk focuses on two fundamental problems: (1) in nonconvex optimization, can gradient descent escape saddle points efficiently? (2) in reinforcement learning, is Q-learning sample efficient? We will provide the first line of provably positive answers to both questions. In particular, we will show that simple modifications to these classical algorithms guarantee significantly better properties, which explains the underlying mechanisms behind their favorable performance in practice.\n
\n
This lecture satisfies requirements for CSCI 591: Research Colloquium
Biography: Chi Jin is a Ph.D. candidate in Computer Science at UC Berkeley, advised by Michael I. Jordan. He received a B.S. in Physics from Peking University. His research interests lie in machine learning, statistics, and optimization, with his PhD work primarily focused on nonconvex optimization and reinforcement learning.
Host: Haipeng Luo
SEQUENCE:5
DTSTART:20190307T110000
LOCATION:OHE 132
DTSTAMP:20190307T110000
SUMMARY:CS Colloquium: Chi Jin (UC Berkeley) Machine Learning: Why Do Simple Algorithms Work So Well?
UID:EC9439B1-FF65-11D6-9973-003065F99D04
DTEND:20190307T120000
END:VEVENT
END:VCALENDAR