BEGIN:VCALENDAR
BEGIN:VEVENT
SUMMARY:Theory Lunch
DESCRIPTION:Speaker: Mengxiao Zhang, CS PhD Student\nTalk Title: Gradient Descent Provably Optimizes Over-Parameterized Neural Networks\nAbstract: This talk covers the paper "Gradient Descent Provably Optimizes Over-Parameterized Neural Networks," which shows that gradient descent achieves zero training loss on over-parameterized neural networks even though the objective function is non-convex and non-smooth.\nHost: Shaddin Dughmi
DTSTART:20191114T121500
LOCATION:SAL 213
URL;VALUE=URI:
DTEND:20191114T140000
END:VEVENT
END:VCALENDAR