BEGIN:VCALENDAR
METHOD:PUBLISH
PRODID:-//Apple Computer\, Inc//iCal 1.0//EN
X-WR-CALNAME;VALUE=TEXT:USC
VERSION:2.0
BEGIN:VEVENT
DESCRIPTION:Speaker: Jason Lee, University of Southern California
Talk Title: Towards Theoretical Understanding of Over-Parametrization in Deep Learning
Series: Fall 2018 Joint CSC@USC/CommNetS-MHI Seminar Series
Abstract: We provide new theoretical insights into why over-parametrization is effective in learning neural networks. For a shallow network with k hidden nodes, quadratic activation, and n training data points, we show that as long as k >= sqrt(2n), over-parametrization enables local search algorithms to find a globally optimal solution for general smooth and convex loss functions. Further, even though the number of parameters may exceed the sample size, we show that with weight decay the solution also generalizes well. \n
\n
Next, we analyze the implicit regularization effects of various optimization algorithms. In particular, we prove that for least squares with mirror descent, the algorithm converges to the closest solution in terms of the Bregman divergence. For linearly separable classification problems, we prove that steepest descent with respect to a norm solves the SVM with respect to the same norm. For over-parametrized non-convex problems such as matrix sensing or neural nets with quadratic activation, we prove that gradient descent converges to the minimum nuclear norm solution, which allows for both meaningful optimization and generalization guarantees. \n
\n
This is joint work with Suriya Gunasekar, Mor Shpigel, Daniel Soudry, Nati Srebro, and Simon Du.
Biography: Jason Lee is an assistant professor in Data Sciences and Operations at the University of Southern California. Prior to that, he was a postdoctoral researcher at UC Berkeley working with Michael Jordan. Jason received his PhD from Stanford University, advised by Trevor Hastie and Jonathan Taylor. His research interests are in statistics, machine learning, and optimization. Lately, he has worked on high-dimensional statistical inference, analysis of non-convex optimization algorithms, and theory for deep learning.\n
\n
Host: Mihailo Jovanovic, mihailo@usc.edu
SEQUENCE:5
DTSTART:20180827T140000
LOCATION:EEB 132
DTSTAMP:20180827T140000
SUMMARY:CSC@USC/CommNetS-MHI Seminar Series
UID:EC9439B1-FF65-11D6-9973-003065F99D04
DTEND:20180827T153000
END:VEVENT
END:VCALENDAR