CS Colloquium: Sungjin Ahn (University of Montreal) - Recent Advances and the Future of Recurrent Neural Networks
Tue, Jan 17, 2017 @ 11:00 AM - 12:20 PM
Thomas Lord Department of Computer Science
Conferences, Lectures, & Seminars
Speaker: Sungjin Ahn, University of Montreal
Talk Title: Recent Advances and the Future of Recurrent Neural Networks
Series: CS Colloquium
Abstract: This lecture satisfies requirements for CSCI 591: Computer Science Research Colloquium.
Although the recent resurgence of Recurrent Neural Networks (RNNs) has achieved remarkable advances in sequence modeling, RNNs still lack many abilities necessary to model more challenging yet important natural phenomena. In this talk, I introduce some recent advances in this direction, focusing on two new RNN architectures: the Hierarchical Multiscale Recurrent Neural Network (HM-RNN) and the Neural Knowledge Language Model (NKLM). In the HM-RNN, each layer of a multi-layered RNN learns a different time-scale, adapting to the inputs from the layer below. The NKLM addresses the problem of incorporating factual knowledge from a knowledge graph into RNNs. I discuss the advantages of these models and conclude the talk with a discussion of the key challenges that lie ahead.
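To illustrate the multiscale idea the abstract describes, here is a minimal sketch of a two-layer RNN in which the upper layer updates only when a boundary signal fires. This is not the HM-RNN itself (which learns discrete boundary detectors end-to-end); the threshold-based boundary, the weight names, and all dimensions here are hypothetical, chosen only to show the fast/slow time-scale structure.

```python
import numpy as np

rng = np.random.default_rng(0)

H = 8   # hidden size per layer (arbitrary, for illustration)
T = 20  # sequence length

# Random weights for the sketch (untrained)
Wx  = rng.standard_normal((H, 1)) * 0.1   # input -> lower layer
Wl  = rng.standard_normal((H, H)) * 0.1   # lower-layer recurrence (fast)
Wu  = rng.standard_normal((H, H)) * 0.1   # upper-layer recurrence (slow)
Wlu = rng.standard_normal((H, H)) * 0.1   # lower layer -> upper layer
wb  = rng.standard_normal(H) * 0.1        # boundary detector (hypothetical)

h_low = np.zeros(H)
h_up  = np.zeros(H)
upper_updates = 0

x = rng.standard_normal((T, 1))
for t in range(T):
    # Lower layer updates at every time step (fast time-scale)
    h_low = np.tanh(Wx @ x[t] + Wl @ h_low)

    # Boundary signal decides whether the upper layer updates.
    # HM-RNN learns this discrete decision; here it is a fixed threshold.
    boundary = float(wb @ h_low) > 0.0
    if boundary:
        # Upper layer updates only at detected boundaries (slow time-scale)
        h_up = np.tanh(Wu @ h_up + Wlu @ h_low)
        upper_updates += 1
    # Otherwise h_up is simply carried over unchanged (a COPY step)

print(f"lower-layer updates: {T}, upper-layer updates: {upper_updates}")
```

The point of the sketch is that the upper layer performs far fewer state updates than the lower layer, so it naturally operates on a coarser time-scale; the actual HM-RNN makes the boundary decision a learned, discrete part of the model rather than a fixed threshold.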
Biography: Sungjin Ahn is currently a postdoctoral researcher at the University of Montreal, working with Prof. Yoshua Bengio on deep learning and its applications. He received his Ph.D. in Computer Science from the University of California, Irvine, under the supervision of Prof. Max Welling. During his Ph.D. program, he co-developed Stochastic Gradient MCMC algorithms and received two best paper awards, from the International Conference on Machine Learning in 2012 and from ParLearning in 2016. His research interests include deep learning (recurrent neural networks, deep generative models), approximate Bayesian inference, and reinforcement learning.
Host: Yan Liu
Audiences: Everyone Is Invited
Contact: Assistant to CS chair