Events for April 08, 2022
ECE-EP Seminar - Jae-sun Seo, Friday, April 8th at 10am via Zoom
Fri, Apr 08, 2022 @ 10:00 AM - 11:00 AM
Ming Hsieh Department of Electrical and Computer Engineering
Conferences, Lectures, & Seminars
Speaker: Jae-sun Seo, Arizona State University
Talk Title: Energy-Efficient AI Chip Designs with Digital and Analog Circuits
Abstract: AI algorithms are now widespread across many practical applications, e.g., convolutional neural networks (CNNs) for computer vision and long short-term memory (LSTM) networks for speech recognition. However, state-of-the-art algorithms are compute- and memory-intensive, posing challenges for AI hardware to perform inference and training tasks with high throughput and low power consumption, especially on area- and energy-constrained edge devices.
In this talk, I will present our recent research on several energy-efficient AI ASIC accelerators, spanning both all-digital chips and analog/mixed-signal circuit-based chips. These include (1) a 40nm CNN inference accelerator with conditional computing and low external memory access, (2) a 28nm CNN training accelerator exploiting dynamic activation/weight sparsity, and (3) a 28nm programmable in-memory computing (IMC) inference accelerator integrating 108 capacitive-coupling-based IMC SRAM macros. We will discuss the digital/analog circuit and architecture design, as well as the hardware-aware algorithms employed in the proposed energy-efficient AI accelerators. Based on the demonstrated advantages and challenges of digital and analog AI chip designs, emerging research directions for new AI hardware with new device/circuit/architecture/algorithm design considerations will be discussed.
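The "dynamic activation/weight sparsity" exploited by the training accelerator above can be illustrated in software. The following is a minimal sketch of zero-skipping multiply-accumulate (MAC), the general idea behind such accelerators; the function name and data are illustrative and are not taken from the chips described in the talk.

```python
import numpy as np

def sparse_mac(activations, weights):
    """Dot product that skips zero activations.

    A zero-skipping MAC array issues no multiply-accumulate for a zero
    operand, saving the corresponding compute energy; `skipped` counts
    how many MACs were avoided.
    """
    acc, skipped = 0.0, 0
    for a, w in zip(activations, weights):
        if a == 0.0:  # dynamic sparsity: no MAC issued for a zero input
            skipped += 1
            continue
        acc += a * w
    return acc, skipped

# ReLU activations in CNNs are often mostly zero, so many MACs can be skipped.
acts = np.array([0.0, 1.5, 0.0, 2.0])
wts = np.array([0.3, 0.2, 0.9, 0.5])
result, skipped = sparse_mac(acts, wts)  # 1.5*0.2 + 2.0*0.5, with 2 of 4 MACs skipped
```

In hardware this skip is implemented with operand gating rather than a branch, but the arithmetic outcome is the same: only nonzero operands contribute work.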
Biography: Jae-sun Seo received the Ph.D. degree from the University of Michigan, Ann Arbor in 2010. From 2010 to 2013, he was with IBM T. J. Watson Research Center, working on the DARPA SyNAPSE project and next-generation processor designs. Since 2014, he has been with Arizona State University, where he is currently an Associate Professor in the School of ECEE. He was a visiting faculty at Intel Circuits Research Lab in 2015. His research interests include efficient hardware design of machine learning algorithms and neuromorphic computing. Dr. Seo was a recipient of IBM Outstanding Technical Achievement Award (2012), NSF CAREER Award (2017), and Intel Outstanding Researcher Award (2021). He has served on the technical program committees for ISSCC, MLSys, DAC, DATE, ICCAD, etc.
Host: ECE-Electrophysics
More Information: Jae-sun Seo Flyer.pdf
Audiences: Everyone Is Invited
Contact: Marilyn Poplawski
-
CILQ Internal Seminar
Fri, Apr 08, 2022 @ 12:00 PM - 01:00 PM
Ming Hsieh Department of Electrical and Computer Engineering
Conferences, Lectures, & Seminars
Speaker: Keith Chugg, Professor, USC
Talk Title: Co-Design of Algorithms and Hardware for Deep Neural Networks
Abstract: Neural networks are in wide use in cloud computing platforms. This includes inference and training, with the latter typically performed on programmable processors with multiply-accumulate (MAC) accelerator arrays (e.g., GPUs). In many applications, it can be desirable to train on an edge device or using energy-efficient application-specific circuits. In this talk I will present some research results on application-specific hardware acceleration methods for neural networks. Pre-defined sparsity is a method to reduce the complexity of training and inference. In contrast to pruning approaches, which remove edges/weights during or after training, this approach sets a pre-defined pattern of sparse connections prior to training and holds this pattern fixed during training and inference. This allows one to design the pattern of sparsity to match a specific hardware acceleration architecture. We also consider Logarithmic Number Systems (LNS) for implementation of training. With LNS, operations are performed on the logarithms of the quantities, so multiplies simplify to additions while additions become more complex in the log domain. We present some preliminary results for LNS training and highlight ongoing challenges in applying this to larger, more complex networks. In many of these approaches we borrow from the design and implementation of iterative decoders for digital communication systems.
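The two techniques in the abstract above can be sketched in a few lines. This is an illustrative sketch, not the speaker's implementation: the fixed mask stands in for pre-defined sparsity (the connection pattern is chosen before training and never changes), and the LNS helpers show why multiplication becomes a cheap addition of log values while addition needs a correction term. All names and dimensions are assumptions made for the example.

```python
import math
import numpy as np

# --- Pre-defined sparsity: a fixed connection pattern chosen before training ---
rng = np.random.default_rng(0)
in_dim, out_dim, density = 8, 4, 0.5
mask = (rng.random((out_dim, in_dim)) < density).astype(np.float32)  # fixed pattern
W = rng.standard_normal((out_dim, in_dim)).astype(np.float32) * mask

def train_step(W, grad, lr=0.1):
    """Masked SGD update: pruned connections stay zero for all of training."""
    return (W - lr * grad) * mask

# --- LNS arithmetic on positive values, represented by their base-2 logs ---
def lns_mul(lx, ly):
    return lx + ly  # log2(x * y) = log2(x) + log2(y): multiply becomes add

def lns_add(lx, ly):
    # log2(x + y) = max + log2(1 + 2^(min - max)): add needs a correction term
    hi, lo = max(lx, ly), min(lx, ly)
    return hi + math.log2(1.0 + 2.0 ** (lo - hi))

la, lb = math.log2(3.0), math.log2(5.0)
assert math.isclose(2.0 ** lns_mul(la, lb), 15.0)  # 3 * 5
assert math.isclose(2.0 ** lns_add(la, lb), 8.0)   # 3 + 5
```

The correction term in `lns_add` is what a hardware LNS adder must approximate (e.g., with a lookup table), which is the cost traded for turning every multiply into an addition.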
Host: CILQ
Webcast: https://usc.zoom.us/j/92417517950?pwd=WUkycy90cndVQko5R3RhQ1U3STBDdz09
More Information: ChuggSeminar-Apr8-2022.pdf
Location: Via Zoom
WebCast Link: https://usc.zoom.us/j/92417517950?pwd=WUkycy90cndVQko5R3RhQ1U3STBDdz09
Audiences: Everyone Is Invited
Contact: Corine Wong