Center for Systems and Control (CSC@USC) and Ming Hsieh Institute for Electrical Engineering
Mon, Oct 30, 2017 @ 02:00 PM - 03:00 PM
Ming Hsieh Department of Electrical and Computer Engineering
Conferences, Lectures, & Seminars
Speaker: Anil Aswani, University of California, Berkeley
Talk Title: Making Robust Decisions from Data
Series: Fall 2017 Joint CSC@USC/CommNetS-MHI Seminar Series
Abstract: Though machine learning has found success in decision-making contexts, these methods are fragile to model mismatch and malicious interference. This is a major impediment to the deployment of automated decision-making in safety-critical systems like those found in healthcare or physical infrastructure. This talk describes three methods we have developed for robust decision-making in different scenarios. The first is a framework for combining robust control with machine learning, and applications to energy efficiency and robotics are highlighted. The second is a set of algorithms for solving inverse optimization (and inverse reinforcement learning) problems with noisy data. This problem arises when estimating utility functions or modeling human-automation systems, and we show it is NP-hard and that existing approaches are statistically inconsistent. We develop a polynomial-time algorithm that is asymptotically optimal as more data is collected. We then discuss applications of our inverse optimization approach to a clinical trial on personalized goal-setting through smartphone apps to increase physical activity, and to an incentive design problem in the Medicare Shared Savings Program, where we show that an investment sharing plan could potentially save Medicare an additional $85 million per year. The third is an approach for bandit models where repeated application of an action causes habituation and a decrease of that action's rewards, while refraining from an action causes recovery and an increase of that action's rewards. Though such problems are PSPACE-complete, we define a class of models called ROGUE bandits for which we can construct policies that achieve logarithmic regret. We describe an application of ROGUE bandits to a personalized healthcare problem of choosing an optimal sequence of daily messages to encourage an individual to increase their physical activity.
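For readers unfamiliar with the habituation/recovery dynamic the abstract describes, the sketch below is a minimal toy simulation, not the speaker's ROGUE bandit model or its regret-optimal policies: each arm's expected reward shrinks when the arm is played and drifts back toward a baseline when it is rested. All class names, parameters, and dynamics here are illustrative assumptions.

import random

class HabituatingArm:
    """Toy arm whose expected reward drops when played and recovers when rested."""
    def __init__(self, baseline, habituation=0.2, recovery=0.05, noise=0.1):
        self.baseline = baseline        # expected reward when fully rested
        self.state = baseline           # current expected reward
        self.habituation = habituation  # drop in expected reward per play
        self.recovery = recovery        # gain in expected reward per rested round
        self.noise = noise              # reward noise (standard deviation)

    def pull(self):
        reward = random.gauss(self.state, self.noise)
        self.state = max(0.0, self.state - self.habituation)   # habituation
        return reward

    def rest(self):
        self.state = min(self.baseline, self.state + self.recovery)  # recovery

def simulate(policy, arms, horizon=1000):
    """Run `policy(t) -> arm index` for `horizon` rounds; return total reward."""
    total = 0.0
    for t in range(horizon):
        choice = policy(t)
        for i, arm in enumerate(arms):
            if i == choice:
                total += arm.pull()
            else:
                arm.rest()
    return total

if __name__ == "__main__":
    random.seed(0)
    # Playing one arm forever drives its reward down through habituation;
    # a simple alternating schedule gives each arm time to recover.
    print("always arm 0:", simulate(lambda t: 0, [HabituatingArm(1.0), HabituatingArm(0.8)]))
    print("alternate   :", simulate(lambda t: t % 2, [HabituatingArm(1.0), HabituatingArm(0.8)]))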
Biography: Anil Aswani is an Assistant Professor in Industrial Engineering and Operations Research (IEOR) at UC Berkeley. He received his B.S. in Electrical Engineering from the University of Michigan in 2005, M.S. in Electrical Engineering and Computer Sciences (EECS) from UC Berkeley in 2007, and Ph.D. in EECS from UC Berkeley in 2010. He received a Hellman Fellowship for his research on food insecurity, the Leon O. Chua Award from Berkeley for outstanding achievement in an area of nonlinear science, and a William Pierskalla Runner-Up Award from the INFORMS Health Applications Society. His research interests include data-driven decision-making, with particular emphasis on addressing inefficiencies and inequities in health systems and physical infrastructure.
Host: Insoon Yang, insoonya@usc.edu
Location: Hughes Aircraft Electrical Engineering Center (EEB) - 132
Audiences: Everyone Is Invited
Contact: Gerrielyn Ramos