Stochastic Regularizer for High Dimensional Small-Sampled Data and Online Learning for Time Series Forecasting
Fri, Jun 14, 2019 @ 10:30 AM - 11:30 AM
Ming Hsieh Department of Electrical and Computer Engineering
Conferences, Lectures, & Seminars
Speaker: Sergül Aydöre, Department of Electrical and Computer Engineering of Stevens Institute of Technology
Talk Title: Stochastic Regularizer for High Dimensional Small-Sampled Data and Online Learning for Time Series Forecasting
Abstract: In this talk, I will focus on developing efficient machine learning algorithms for two situations. The first is high-dimensional, small-sampled, and noisy data, as in neuroscience, biology, or geology, where data collection is expensive. This phenomenon, known as the curse of dimensionality, causes overfitting, which is often an obstacle to using machine learning techniques. We formulate a structured stochastic regularization that relies on feature grouping. Using a fast clustering algorithm, we define a family of feature groups that capture feature covariations. Inside a stochastic gradient descent loop, we then randomly average these features. Experiments on two real-world datasets demonstrate that our approach produces models that generalize better than those trained with conventional regularizers, improves convergence speed, and has a linear computational cost. The second challenging situation is updating a machine learning model with streaming data without iterating through previously seen data. This is known as "online learning," and one application is time series forecasting. The performance of online learning algorithms is typically evaluated by the regret. We introduce a new local regret for non-convex models in dynamic environments and present an update rule, based on time-smoothed gradients, that incurs a cost sublinear in time T under our proposed regret. Using a real-world dataset, we show that our time-smoothed approach yields several benefits over state-of-the-art competitors: results are more stable against new data, training is more robust to hyperparameter selection, and our approach is more computationally efficient.
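The first technique in the abstract — randomly averaging clustered features inside an SGD loop — can be illustrated with a minimal sketch. The contiguous grouping, the group count, and the averaging probability below are illustrative assumptions; the talk's method clusters features by their covariations with a fast clustering algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n samples, d features (the high-dimensional, small-sampled regime).
n, d = 20, 100
X = rng.normal(size=(n, d))

# Hypothetical stand-in for the fast clustering step: contiguous partitions.
n_groups = 10
groups = np.array_split(np.arange(d), n_groups)

def stochastic_group_average(X, groups, p=0.5, rng=rng):
    """With probability p, replace each group's features by their group mean.

    A sketch of the structured stochastic regularizer described in the
    abstract: applied per mini-batch inside an SGD loop, it acts like a
    structured dropout over groups of covarying features.
    """
    X_reg = X.copy()
    for g in groups:
        if rng.random() < p:
            X_reg[:, g] = X[:, g].mean(axis=1, keepdims=True)
    return X_reg

X_batch = stochastic_group_average(X, groups)
print(X_batch.shape)  # same shape as X; a random subset of groups averaged
```

In an actual training loop, `stochastic_group_average` would be applied to each mini-batch before the gradient step, so the model never relies on any single feature within a group.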
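The second technique — an online update using time-smoothed gradients — can likewise be sketched. The linear model, squared loss, window size, and learning rate here are illustrative assumptions, not the talk's exact setup; the idea shown is averaging gradients over a recent window of observations before each update.

```python
import numpy as np

rng = np.random.default_rng(1)
d, T, window, lr = 5, 200, 10, 0.1
w_true = rng.normal(size=d)  # ground-truth parameters of the toy stream
w = np.zeros(d)              # online model parameters
recent = []                  # buffer of the last `window` observations

def grad(w, x, y):
    """Gradient of the squared loss 0.5 * (w @ x - y)**2 w.r.t. w."""
    return (w @ x - y) * x

for t in range(T):
    # Observe one new streaming sample; never revisit older data than the window.
    x = rng.normal(size=d)
    y = w_true @ x + 0.01 * rng.normal()
    recent.append((x, y))
    if len(recent) > window:
        recent.pop(0)
    # Time-smoothed gradient: average the current model's gradients over the window.
    g = np.mean([grad(w, xi, yi) for xi, yi in recent], axis=0)
    w -= lr * g

print(np.linalg.norm(w - w_true))  # small after T steps on this toy stream
```

Averaging over the window damps the step-to-step variance of the updates, which is one mechanism behind the stability against new data reported in the abstract.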
Biography: Sergül Aydöre has been an Assistant Professor in the Department of Electrical and Computer Engineering at Stevens Institute of Technology since August 2018. Before joining Stevens, she was a Machine Learning Scientist on Amazon's demand forecasting team, where she built neural network models to predict the demand for millions of products to enable better in-stock positions. She is also an associate member of the Parietal team at Inria, Saclay. She received her PhD from the Signal and Image Processing Institute at the University of Southern California in 2014; her PhD work was on developing robust connectivity measures for neuroimaging data. Prior to Amazon, she was a postdoctoral researcher at Columbia University, where she implemented machine learning models for EEG data, and then spent a year as a Data Scientist at JP Morgan. She received her B.S. and M.S. degrees in Electrical and Electronics Engineering from Bogazici University, Istanbul, where she developed signal processing techniques to investigate biomedical signals. She was a recipient of the Viterbi School of Engineering Doctoral Fellowship and was recognized as a 2014 USC Ming Hsieh Institute Ph.D. Scholar.
Host: Professor Richard Leahy
Audiences: Everyone Is Invited
Contact: Talyia White