Thomas Lord Department of Computer Science
Ph.D. Defense - Yaguang Li
Tue, April 23rd, 2019
1:00 pm - 3:00 pm
Location: PHE 325
Title: Spatiotemporal Prediction with Deep Learning on Graphs
PhD Candidate: Yaguang Li
Date, Time, and Location: Tuesday, April 23rd, 2019 at 1pm in PHE 325
Committee: Prof. Cyrus Shahabi, Prof. Yan Liu, and Prof. Antonio Ortega
Abstract:
Spatiotemporal data is ubiquitous in our daily life, arising in domains ranging from climate science and transportation to social media and various dynamical systems. Such data is usually collected from a set of correlated objects over time, where the objects can be sensors, locations, regions, particles, users, etc. For instance, in a transportation network, road sensors constantly record traffic conditions at correlated locations; in social networks, we observe the activity of users, correlated through friendships, evolving over time; and in dynamical systems such as physics and climate models, we observe the movement of particles interacting with each other. Spatiotemporal prediction aims to model the evolution of such a set of correlated objects. It has a wide range of applications, from classical subjects such as intelligent transportation systems, climate science, social media, and physics simulation to emerging fields such as sustainability, the Internet of Things (IoT), and healthcare.
Spatiotemporal prediction is challenging mainly due to complicated spatial dependencies and temporal dynamics. In this thesis, we study the following important questions in spatiotemporal prediction: (1) How can we model complex spatial dependencies among objects, which are usually non-Euclidean and multimodal? (2) How can we model non-linear and non-stationary temporal dynamics for accurate long-term prediction? (3) How can we infer the correlations or interactions among objects when they are not provided and cannot be constructed a priori?
To model the complex spatial dependencies, we represent the non-Euclidean pairwise correlations among objects using directed graphs and propose the novel diffusion graph convolution, which captures spatial dependency with bidirectional random walks on the graph. To model multimodal correlations among objects, we further propose the multi-graph convolution network. To model the non-linear and non-stationary temporal dynamics, we integrate the diffusion graph convolution into a recurrent neural network to jointly model spatial and temporal dependencies. To capture long-term temporal dependencies, we propose to use the sequence-to-sequence architecture with scheduled sampling. To utilize global contextual information in temporal correlation modeling, we further propose the contextual gated recurrent neural network, which augments the recurrent neural network with a context-aware gating mechanism to re-weight different historical observations. To infer correlations among objects, we propose a model based on a structure-informed variational graph autoencoder, which infers explicit interactions by considering both observed movements and structural prior knowledge, e.g., the node degree distribution, edge type distribution, and sparsity. The model represents the structural prior knowledge as differentiable constraints on the interaction graph and optimizes the graph using gradient-based methods.
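
For readers unfamiliar with the operation, the following is a minimal sketch of a bidirectional diffusion graph convolution as described above: node features are repeatedly propagated along the forward and backward random-walk transition matrices of the directed graph and combined with learned weights. The NumPy implementation, function name, and parameter shapes are illustrative assumptions for this announcement, not the thesis code.

import numpy as np

def diffusion_graph_conv(X, W, thetas_fwd, thetas_bwd):
    """Sketch of a bidirectional diffusion graph convolution.

    X          : (N, P) node features (N objects, P input channels).
    W          : (N, N) weighted adjacency matrix of a directed graph.
    thetas_fwd : list of K (P, Q) weight matrices for the forward walk.
    thetas_bwd : list of K (P, Q) weight matrices for the backward walk.
    Returns    : (N, Q) convolved node features.
    """
    # Random-walk transition matrices in both directions.
    d_out = W.sum(axis=1, keepdims=True)          # out-degrees, shape (N, 1)
    d_in = W.sum(axis=0, keepdims=True).T         # in-degrees, shape (N, 1)
    P_fwd = W / np.maximum(d_out, 1e-10)          # D_O^{-1} W
    P_bwd = W.T / np.maximum(d_in, 1e-10)         # D_I^{-1} W^T

    H = np.zeros((X.shape[0], thetas_fwd[0].shape[1]))
    S_fwd, S_bwd = X.copy(), X.copy()             # k = 0 diffusion states
    for k in range(len(thetas_fwd)):
        # Mix the current forward/backward diffusion states with learned weights.
        H += S_fwd @ thetas_fwd[k] + S_bwd @ thetas_bwd[k]
        S_fwd = P_fwd @ S_fwd                     # one more forward diffusion step
        S_bwd = P_bwd @ S_bwd                     # one more backward diffusion step
    return H

# Example usage: 4 nodes, 3 input channels, 2 output channels, K = 2 diffusion steps.
rng = np.random.default_rng(0)
W = rng.random((4, 4))
X = rng.random((4, 3))
thetas_fwd = [rng.random((3, 2)) for _ in range(2)]
thetas_bwd = [rng.random((3, 2)) for _ in range(2)]
out = diffusion_graph_conv(X, W, thetas_fwd, thetas_bwd)   # shape (4, 2)

In the thesis, this graph convolution replaces the matrix multiplications inside a recurrent cell, which is how the spatial and temporal dependencies are modeled jointly.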
We conduct extensive experiments on multiple large-scale real-world datasets for various spatiotemporal prediction tasks, including traffic forecasting, spatiotemporal demand forecasting, travel time estimation, relational inference, and simulation. The results show that the proposed models consistently achieve clear improvements over state-of-the-art methods. The proposed models and their variants have been deployed in large-scale real-world systems for applications including road traffic speed prediction, Internet traffic forecasting, air quality forecasting, travel time estimation, and spatiotemporal demand forecasting.
Location: Charles Lee Powell Hall (PHE) - 325
Audiences: Everyone Is Invited
Contact: Lizsl De Leon