PhD Defense - Hongkuan Zhou
Fri, May 17, 2024 @ 10:00 AM - 12:00 PM
Thomas Lord Department of Computer Science
Student Activity
Title: Scaling up Temporal Graph Learning: Powerful Models, Efficient Algorithms, and Optimized Systems
Committee Members: Prof. Keith Michael Chugg, Prof. Rajgopal Kannan, Prof. Viktor K. Prasanna (Chair), Prof. Mukund Raghothaman
Abstract: Recently, Temporal Graph Neural Networks (TGNNs) have extended the scope of Graph Representation Learning (GRL) to dynamic graphs. TGNNs generate high-quality and versatile dynamic node embeddings by simultaneously encoding the graph structure, node and edge contexts, and their temporal dependencies. TGNNs demonstrably outperform traditional dynamic graph analytic algorithms in impactful applications that address critical real-world challenges, such as social network analysis, healthcare applications, and traffic prediction and management. However, due to prevalent noise in real-world data, irregular memory accesses, complex temporal dependencies, and high computational complexity, current TGNNs face the following problems when scaling to large dynamic graphs: (1) Underpowered models. Current TGNN models struggle to capture high-frequency information and to handle diverse and dynamic noise. (2) Inefficient algorithms. Current training algorithms cannot leverage the massively parallel processing architectures of modern hardware, while current inference algorithms cannot meet the requirements of different scenarios. (3) Unoptimized systems. Current TGNN systems suffer from inefficient designs that hinder overall performance. In this dissertation, we address the above issues via model-algorithm-system co-design. For model improvements, we propose a static node-memory-enhanced TGNN model and a temporal adaptive sampling technique. For algorithm improvements, we propose a scalable distributed training algorithm with heuristic guidelines for finding the optimal configuration, and a versatile inference algorithm. For system improvements, we propose techniques such as dynamic feature caching and simplified temporal attention to compose optimized training and inference systems. We demonstrate significant improvements in accuracy, training time, inference latency, and throughput compared with state-of-the-art TGNN solutions.
Location: Hughes Aircraft Electrical Engineering Center (EEB) - 132
Audiences: Everyone Is Invited
Contact: Hongkuan Zhou