Fri, Jun 23, 2017 @ 10:30 AM - 11:30 AM
Computer Science, Ming Hsieh Department of Electrical and Computer Engineering
Conferences, Lectures, & Seminars
Speaker: Yu Wang, Tsinghua University
Talk Title: Software-Hardware Co-Design for Efficient Neural Network Acceleration on FPGA
Abstract: Artificial neural networks can be accelerated on FPGAs with high energy efficiency compared with general-purpose processors. However, the long development period and insufficient performance of traditional FPGA acceleration have prevented its wide adoption. We propose a complete design flow to achieve both fast deployment and high energy efficiency for accelerating neural networks on FPGA [FPGA 16, FPGA 17 best paper]. Deep compression and data quantization are employed to exploit the redundancy in the algorithms and reduce both computational and memory complexity. Two architecture designs, for CNN and for DNN/RNN, are proposed together with a compilation environment. Evaluated on Xilinx Zynq 7000 and Kintex UltraScale series FPGAs with real-world neural networks, up to 15 times higher energy efficiency can be achieved compared with mobile and desktop GPUs. Finally, we will discuss the possibilities and trends of adopting emerging NVM technology for efficient learning systems to further improve energy efficiency.
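The data quantization mentioned in the abstract maps floating-point network weights to low-bit fixed-point values so that FPGA logic can use narrow multipliers and smaller on-chip memories. As a rough illustration only (a minimal sketch, not the speaker's actual method; the function name and the per-tensor fractional-length heuristic are this example's own assumptions), a dynamic fixed-point quantizer in NumPy might look like:

```python
import numpy as np

def quantize_fixed_point(weights, bits=8):
    """Quantize float weights to signed fixed-point with `bits` total bits.

    The fractional length is chosen per tensor (conservatively) so the
    largest-magnitude weight still fits, in the spirit of dynamic-precision
    quantization schemes. Returns the dequantized weights and the
    fractional length used.
    """
    max_abs = float(np.max(np.abs(weights)))
    if max_abs == 0.0:
        return weights.copy(), 0
    # Bits reserved for the integer part (conservative upper bound),
    # plus one sign bit; the rest become fractional bits.
    int_bits = max(0, int(np.ceil(np.log2(max_abs + 1e-12))) + 1)
    frac_bits = bits - 1 - int_bits
    scale = 2.0 ** frac_bits
    qmin, qmax = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    q = np.clip(np.round(weights * scale), qmin, qmax)
    return q / scale, frac_bits

w = np.array([0.3, -1.25, 0.718, 0.05])
wq, fl = quantize_fixed_point(w, bits=8)
```

With 8 total bits the quantization error per weight stays within half a least-significant step, 2^(-fl)/2, which is the kind of bounded error that lets compressed networks retain accuracy while cutting memory traffic.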
Biography: Yu Wang is currently a tenured Associate Professor with the Department of Electronic Engineering, Tsinghua University. He received his B.S. degree in 2002 and Ph.D. degree (with honors) in 2007 from Tsinghua University, Beijing. He has published over 150 papers in refereed journals and conferences in the design automation and FPGA areas. His research interests include brain-inspired computing, application-specific hardware computing, parallel circuit analysis, and power/reliability-aware system design methodology.
Host: Viktor Prasanna, email@example.com
Audiences: Everyone Is Invited
Contact: Kathy Kassar