Events for March 09, 2022
-
ECE Seminar: Algebraic Neural Networks: Stability to Deformations
Wed, Mar 09, 2022 @ 10:00 AM - 11:00 AM
Ming Hsieh Department of Electrical and Computer Engineering
Conferences, Lectures, & Seminars
Speaker: Dr. Alejandro Parada-Mayorga, Postdoctoral Researcher, Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia
Talk Title: Algebraic Neural Networks: Stability to Deformations
Abstract: Convolutional architectures play a central role in countless machine learning scenarios, and the numerical evidence for their advantages is overwhelming. Theoretical insights have provided solid explanations of why such architectures work well. These analyses, apparently different in nature, have been performed for signals defined on different domains and with different notions of convolution, yet with remarkable similarities in the final results, raising the question of whether there exists an explanation at a more structural level. In this talk we provide an affirmative answer to this question with a first-principles analysis introducing algebraic neural networks (AlgNNs), which rely on algebraic signal processing and algebraic signal models. In particular, we study the stability properties of algebraic neural networks, showing that stability results for traditional CNNs, graph neural networks (GNNs), group neural networks, graphon neural networks, or any formal convolutional architecture can be derived as particular cases of our results. This shows that stability is a universal property, at an algebraic level, of convolutional architectures, and it also explains the remarkable similarities we find when analyzing stability for each particular type of architecture.
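The algebraic viewpoint behind the talk can be sketched concretely. In algebraic signal processing, a convolutional filter is a polynomial h(S) of a shift operator S: taking S to be a cyclic shift recovers classical convolution, while taking S to be a graph adjacency matrix yields a graph filter. The toy example below (my own illustration, not material from the talk; the matrices and filter taps are invented) builds such a polynomial filter and shows that a small deformation of the shift operator perturbs the output only slightly, which is the kind of stability property the talk analyzes in full generality.

```python
import numpy as np

def poly_filter(S, h):
    """Build the polynomial filter H = sum_k h[k] * S^k."""
    H = np.zeros_like(S, dtype=float)
    Sk = np.eye(S.shape[0])  # S^0
    for hk in h:
        H += hk * Sk
        Sk = Sk @ S
    return H

# A small undirected path graph; its adjacency matrix serves as the
# shift operator S (illustrative choice).
S = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
x = np.array([1.0, 0.0, -1.0])   # a graph signal
h = [0.5, 0.3, 0.1]              # filter taps (hypothetical)

y = poly_filter(S, h) @ x

# A small deformation of the shift operator changes the output only
# slightly: the filter responds gracefully to perturbations of S.
S_pert = S + 0.01 * np.random.default_rng(0).standard_normal(S.shape)
y_pert = poly_filter(S_pert, h) @ x
print(np.linalg.norm(y - y_pert))  # small compared to ||y||
```

The same code works unchanged if S is replaced by any other shift operator (a cyclic shift, a Laplacian, a graphon-induced operator), which is the structural point: the filter is defined by the algebra, not by the domain.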
Biography: Alejandro Parada-Mayorga (alejopm@seas.upenn.edu) received his B.Sc. and M.Sc. degrees in electrical engineering from Universidad Industrial de Santander, Colombia, in 2009 and 2012, respectively, and his Ph.D. degree in electrical engineering from the University of Delaware, Newark, in 2019. Currently, he is a postdoctoral researcher at the University of Pennsylvania, Philadelphia, under the supervision of Prof. Alejandro Ribeiro. His research interests include algebraic signal processing, algebraic neural networks, graph neural networks, graph signal processing, and applications of representation theory of algebras and category theory.
Host: Dr. Shri Narayanan, shri@ee.usc.edu
Webcast: https://usc.zoom.us/j/92088625170?pwd=enhYNUpicEYvS0R5SEViVVBobjQ1dz09
Location: Hughes Aircraft Electrical Engineering Center (EEB) - 248
Audiences: Everyone Is Invited
Contact: Mayumi Thrasher
This event is open to all eligible individuals. USC Viterbi operates all of its activities consistent with the University's Notice of Non-Discrimination. Eligibility is not determined based on race, sex, ethnicity, sexual orientation, or any other prohibited factor.
-
Center of Autonomy and AI, Center for Cyber-Physical Systems and the Internet of Things, and Ming Hsieh Institute Seminar Series
Wed, Mar 09, 2022 @ 02:00 PM - 03:00 PM
Ming Hsieh Department of Electrical and Computer Engineering
Conferences, Lectures, & Seminars
Speaker: Swarat Chaudhuri, Computer Science Department, The University of Texas at Austin
Talk Title: Neurosymbolic Programming
Series: Center for Cyber-Physical Systems and Internet of Things
Abstract: I will speak about neurosymbolic programming, an emerging research area that bridges the fields of deep learning and program synthesis. As in classic machine learning, the goal here is to learn functions from data. However, these functions are represented as programs that can use neural modules in addition to symbolic primitives, and they are induced using a combination of symbolic search and gradient-based optimization. Neurosymbolic programming can offer multiple advantages over end-to-end deep learning. Programs can sometimes naturally represent long-horizon, procedural tasks that are difficult to perform using deep networks. Neurosymbolic representations are also commonly easier to interpret and formally verify than neural networks. The restrictions of a programming language can serve as a form of regularization and lead to more generalizable and data-efficient learning. Compositional programming abstractions can also be a natural way of reusing learned modules across learning tasks.
In the talk, I will illustrate some of the potential benefits of research in this area. I will also categorize the main ways in which symbolic and neural learning techniques come together here. I will conclude with a discussion of the open technical challenges in the field.
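The combination the abstract describes, symbolic search over program structure plus gradient-based fitting of continuous parameters, can be illustrated with a deliberately tiny toy (my own sketch, not the speaker's framework; the templates and data are invented). A small space of program templates is enumerated symbolically, each template's scalar parameter is fitted by gradient descent, and the best-fitting program is returned.

```python
def loss(template, w, data):
    """Mean squared error of a template with parameter w on the data."""
    return sum((template(w, x) - y) ** 2 for x, y in data) / len(data)

def fit(template, data, steps=500, lr=0.05, eps=1e-5):
    """Gradient-based half: fit w by descent on a numeric gradient."""
    w = 0.0
    for _ in range(steps):
        g = (loss(template, w + eps, data)
             - loss(template, w - eps, data)) / (2 * eps)
        w -= lr * g
    return w, loss(template, w, data)

# Symbolic half: a tiny space of program templates, each with one
# learnable parameter w standing in for a "neural" module.
templates = {
    "w * x":                 lambda w, x: w * x,
    "w * x if x > 0 else 0": lambda w, x: w * x if x > 0 else 0.0,
    "w":                     lambda w, x: w,
}

# Data generated from a hidden target program: 2 * relu(x).
data = [(x, 2.0 * max(x, 0.0)) for x in [-2.0, -1.0, 0.0, 1.0, 2.0]]

# Enumerative symbolic search: fit every template, keep the best.
best = min(((name,) + fit(t, data) for name, t in templates.items()),
           key=lambda r: r[2])
print(best)  # the conditional template should win, with w near 2
```

Real neurosymbolic systems replace the scalar w with full neural networks, the three-template enumeration with guided search over a programming language, and the numeric gradient with backpropagation, but the division of labor is the same: discrete search picks the program skeleton, and continuous optimization fills in its learned parts.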
Biography: Swarat Chaudhuri (http://www.cs.utexas.edu/~swarat) is an Associate Professor of Computer Science and the director of the Trishul laboratory at UT Austin. His research lies at the interface of programming languages, logic, and machine learning. Through a synthesis of ideas from these areas, he seeks to develop a new generation of intelligent systems that are reliable, transparent, and secure, and that can solve complex procedural tasks beyond the scope of contemporary AI.
Host: Pierluigi Nuzzo
Webcast: https://usc.zoom.us/webinar/register/WN_zyIBh_1gQLmKpMJG0GyLxw
Location: Online
Audiences: Everyone Is Invited
Contact: Talyia White
This event is open to all eligible individuals. USC Viterbi operates all of its activities consistent with the University's Notice of Non-Discrimination. Eligibility is not determined based on race, sex, ethnicity, sexual orientation, or any other prohibited factor.