Events for the 2nd week of April
-
Center of Autonomy and AI, Center for Cyber-Physical Systems and the Internet of Things, and Ming Hsieh Institute Seminar Series
Wed, Apr 06, 2022 @ 02:00 PM - 03:00 PM
Ming Hsieh Department of Electrical and Computer Engineering
Conferences, Lectures, & Seminars
Speaker: Aaron Johnson, Mechanical Engineering at Carnegie Mellon University
Talk Title: The Trouble with Contact: State Estimation and Control Generation for Discontinuous Systems
Series: Center for Cyber-Physical Systems and Internet of Things
Abstract: Contact with the outside world is challenging for robots due to its inherently discontinuous nature -- when a foot or hand is touching a surface, the forces are completely different than when it is just above the surface. However, most of our computational and analytic tools for planning, learning, and control assume continuous (if not smooth or even linear) systems. Simple models of contact make assumptions (such as plasticity and Coulomb friction) that are known to be not only physically wrong but also inconsistent. In this talk I will present techniques for overcoming these challenges by adapting smooth methods to systems with changing contact conditions. In particular, I will focus on two topics: first, the "Salted Kalman Filter" for state estimation over hybrid systems; second, techniques for generating new controllers under changing contact conditions, using both higher-order direct collocation and hybrid iLQR.
Biography: Prof. Johnson is an Assistant Professor of Mechanical Engineering at Carnegie Mellon University, where he directs the Robomechanics Lab, working on legged robots, adaptive control, contact-rich manipulation, physics-based planning and learning, and terrain manipulation. Previously, his postdoc in the Personal Robotics Lab at Carnegie Mellon University focused on convergent manipulation planning algorithms. He received his PhD in 2014 from Kod*lab at the University of Pennsylvania, where he worked on self-manipulation and dynamic behaviors in legged robots, among other topics. He is the recipient of the NSF CAREER Award, the ARO Young Investigator Award, and the CMU George Tallman Ladd Research Award.
Host: Pierluigi Nuzzo and Feifei Qian
Webcast: https://usc.zoom.us/webinar/register/WN_zyIBh_1gQLmKpMJG0GyLxw
Location: Online
WebCast Link: https://usc.zoom.us/webinar/register/WN_zyIBh_1gQLmKpMJG0GyLxw
Audiences: Everyone Is Invited
Contact: Talyia White
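The abstract's central point -- that contact makes dynamics discontinuous, breaking tools built for smooth systems -- can be illustrated with a textbook hybrid system. The sketch below uses a generic bouncing-point-mass model (a guard that detects contact and a reset map that jumps the state), not the speaker's actual models; all names and constants here are illustrative assumptions.

```python
# Minimal hybrid-system sketch: a point mass falling under gravity.
# Flight dynamics are smooth, but contact triggers a guard condition
# and a reset map that changes the velocity discontinuously -- the
# kind of jump that standard smooth estimators and controllers
# do not handle natively.

G = 9.81            # gravitational acceleration, m/s^2
RESTITUTION = 0.8   # fraction of speed retained at impact (assumed)

def step(h, v, dt=1e-3):
    """One Euler step of flight, with a contact guard and reset."""
    h_new = h + v * dt
    v_new = v - G * dt
    if h_new <= 0.0:                   # guard: height reaches the ground
        h_new = 0.0
        v_new = -RESTITUTION * v_new   # reset: velocity jumps discontinuously
    return h_new, v_new

# Simulate 5 seconds of bouncing from 1 m with zero initial velocity.
h, v = 1.0, 0.0
for _ in range(5000):
    h, v = step(h, v)
```

Between impacts the state evolves smoothly; each impact is a discrete jump, which is why estimation over such systems needs hybrid-aware machinery rather than a single smooth filter.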
-
ECE-EP Seminar - Jae-sun Seo, Friday, April 8th at 10am via Zoom
Fri, Apr 08, 2022 @ 10:00 AM - 11:00 AM
Ming Hsieh Department of Electrical and Computer Engineering
Conferences, Lectures, & Seminars
Speaker: Jae-sun Seo, Arizona State University
Talk Title: Energy-Efficient AI Chip Designs with Digital and Analog Circuits
Abstract: AI algorithms are now widespread across many practical applications, e.g., convolutional neural networks (CNNs) for computer vision and long short-term memory (LSTM) networks for speech recognition. However, state-of-the-art algorithms are compute- and memory-intensive, posing challenges for AI hardware to perform inference and training tasks with high throughput and low power consumption, especially on area- and energy-constrained edge devices.
In this talk, I will present our recent research on several energy-efficient AI ASIC accelerators, spanning both all-digital chips and analog/mixed-signal circuit-based chips. These include (1) a 40nm CNN inference accelerator with conditional computing and low external memory access, (2) a 28nm CNN training accelerator exploiting dynamic activation/weight sparsity, and (3) a 28nm programmable in-memory computing (IMC) inference accelerator integrating 108 capacitive-coupling-based IMC SRAM macros. We will discuss the digital/analog circuit and architecture design, as well as the hardware-aware algorithms employed in these accelerators. Based on the demonstrated advantages and challenges of digital and analog AI chip designs, I will also discuss emerging research directions for new AI hardware with new device/circuit/architecture/algorithm design considerations.
Biography: Jae-sun Seo received the Ph.D. degree from the University of Michigan, Ann Arbor in 2010. From 2010 to 2013, he was with IBM T. J. Watson Research Center, working on the DARPA SyNAPSE project and next-generation processor designs. Since 2014, he has been with Arizona State University, where he is currently an Associate Professor in the School of ECEE. He was a visiting faculty at Intel Circuits Research Lab in 2015. His research interests include efficient hardware design of machine learning algorithms and neuromorphic computing. Dr. Seo was a recipient of IBM Outstanding Technical Achievement Award (2012), NSF CAREER Award (2017), and Intel Outstanding Researcher Award (2021). He has served on the technical program committees for ISSCC, MLSys, DAC, DATE, ICCAD, etc.
Host: ECE-Electrophysics
More Information: Jae-sun Seo Flyer.pdf
Audiences: Everyone Is Invited
Contact: Marilyn Poplawski
-
CILQ Internal Seminar
Fri, Apr 08, 2022 @ 12:00 PM - 01:00 PM
Ming Hsieh Department of Electrical and Computer Engineering
Conferences, Lectures, & Seminars
Speaker: Keith Chugg, Professor, USC
Talk Title: Co-Design of Algorithms and Hardware for Deep Neural Networks
Abstract: Neural networks are in wide use in cloud computing platforms. This includes both inference and training, with the latter typically performed on programmable processors with multiply-accumulate (MAC) accelerator arrays (e.g., GPUs). In many applications, it can be desirable to train on an edge device or using energy-efficient application-specific circuits. In this talk I will present some research results on application-specific hardware acceleration methods for neural networks. Pre-defined sparsity is a method to reduce the complexity of training and inference. In contrast to pruning approaches, which remove edges/weights during or after training, this approach sets a pre-defined pattern of sparse connections prior to training and holds this pattern fixed during both training and inference. This allows one to design the pattern of sparsity to match a specific hardware acceleration architecture. We also consider Logarithmic Number Systems (LNS) for the implementation of training. With LNS, operations are performed on the logarithms of the quantities, so multiplications simplify to additions, while additions become more complex in the log domain. We present some preliminary results for LNS training and highlight ongoing challenges in applying it to larger, more complex networks. In many of these approaches we borrow from the design and implementation of iterative decoders for digital communication systems.
Host: CILQ
Webcast: https://usc.zoom.us/j/92417517950?pwd=WUkycy90cndVQko5R3RhQ1U3STBDdz09
More Information: ChuggSeminar-Apr8-2022.pdf
Location: via Zoom
WebCast Link: https://usc.zoom.us/j/92417517950?pwd=WUkycy90cndVQko5R3RhQ1U3STBDdz09
Audiences: Everyone Is Invited
Contact: Corine Wong
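The LNS trade-off described in the abstract -- multiplications become cheap additions, while additions become the expensive operation -- can be sketched in a few lines. This is a generic illustration of logarithmic arithmetic, not the speaker's hardware design; the helper names below are invented for the example.

```python
import math

# In a Logarithmic Number System (LNS), a positive value x is stored
# as log2(x). Multiplying two values is then just adding their stored
# logs. Adding two values is the hard part: it requires evaluating
# log2(1 + 2^d), which is what dedicated LNS hardware approximates.

def to_lns(x):
    """Encode a positive value as its base-2 logarithm."""
    return math.log2(x)

def from_lns(lx):
    """Decode an LNS value back to the linear domain."""
    return 2.0 ** lx

def lns_mul(lx, ly):
    # Multiplication in the linear domain = addition in the log domain.
    return lx + ly

def lns_add(lx, ly):
    # Addition is the expensive operation in LNS:
    # log2(x + y) = max + log2(1 + 2^(min - max))
    hi, lo = max(lx, ly), min(lx, ly)
    return hi + math.log2(1.0 + 2.0 ** (lo - hi))

la, lb = to_lns(3.0), to_lns(5.0)
product = from_lns(lns_mul(la, lb))  # 3 * 5
total = from_lns(lns_add(la, lb))    # 3 + 5
```

Since MAC operations in neural-network training are dominated by multiplications, trading cheap multiplies for harder adds can pay off when the log-domain addition is approximated efficiently in hardware.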