Events for March
-
IISE Regional Conference
Sun, Mar 01, 2020
Viterbi School of Engineering Student Affairs
Conferences, Lectures, & Seminars
Audiences: Everyone Is Invited
Contact: Viterbi Undergraduate Programs
-
CS Colloquium: Melisa Orta Martinez (Stanford University) - Design and Analysis of Open-Source Educational Haptic Devices
Mon, Mar 02, 2020 @ 11:00 AM - 12:00 PM
Thomas Lord Department of Computer Science
Conferences, Lectures, & Seminars
Speaker: Melisa Orta Martinez, Stanford University
Talk Title: Design and Analysis of Open-Source Educational Haptic Devices
Series: CS Colloquium
Abstract: The sense of touch (haptics) is an active perceptual system used from our earliest days to discover the world around us. However, formal education is not designed to take advantage of this sensory modality. As a result, very little is known about the effects of using haptics in K-12 and higher education, or about the requirements for haptic devices in educational applications. This talk will present three novel, open-source, low-cost haptic devices for educational applications and discuss some general principles for designing such devices. The first device, Hapkit, is a one-degree-of-freedom kinesthetic device that has been used in several educational environments, where we have discovered the potential of haptics to display abstract mathematical concepts and observed the importance of device customization for the students. The second, Haplink, introduces a novel mechanism that enables the device to transform between a one- and a two-degree-of-freedom haptic device in order to enable additive learning. The third device, HapCaps, is a tactile haptic device that was developed to study the connection between finger perception and math learning in young children. The aim is to design haptic devices that can be used in several educational environments in order to understand the role of haptics in learning.
This lecture satisfies requirements for CSCI 591: Research Colloquium.
Biography: Melisa Orta Martinez received the BS degree in electronic systems engineering from the Instituto Tecnologico y de Estudios Superiores de Monterrey in Mexico City, during which she spent a year working as a research intern at the Heinz Nixdorf Institute, Paderborn, Germany. She then obtained an MS degree in electrical engineering from Stanford University, Stanford, CA. After her master's degree she worked at Apple Inc. for three years in the Human Interface Devices group. She is currently working toward the doctoral degree in mechanical engineering at Stanford University. Her research interests include haptics, robotics, and education.
Host: Heather Culbertson
Location: Ronald Tutor Hall of Engineering (RTH) - 109
Audiences: Everyone Is Invited
Contact: Assistant to CS chair
-
Warming Up the Engineering Classroom: Sharing Stories of International Students
Mon, Mar 02, 2020 @ 11:30 AM - 12:30 PM
Viterbi School of Engineering Student Affairs
Conferences, Lectures, & Seminars
Speaker: Viterbi Students, Prof. Harly Ramsey, and Prof. Helen Choi
Series: DEI Week
Abstract: At USC, one in four students is an international student. However, their diverse voices can sometimes be excluded from critical classroom discussions. To improve learning experiences for international students, Professors Harly Ramsey and Helen Choi of the Viterbi Engineering Writing Program and Viterbi students will share ideas for building more inclusive and productive classrooms.
Host: Prof. Harly Ramsey and Prof. Helen Choi
More Info: https://calendar.usc.edu/event/building_inclusive_productive_learning_communities_for_international_students
Location: Mark Taper Hall Of Humanities (THH) - 420
Audiences: Everyone Is Invited
Contact: Helen Choi
Event Link: https://calendar.usc.edu/event/building_inclusive_productive_learning_communities_for_international_students
-
Center for Cyber-Physical Systems and Internet of Things and Ming Hsieh Institute Seminar
Mon, Mar 02, 2020 @ 12:00 PM - 01:00 PM
Ming Hsieh Department of Electrical and Computer Engineering
Conferences, Lectures, & Seminars
Speaker: Dr. Tuba Yavuz, Electrical and Computer Engineering Department at the University of Florida
Talk Title: Improving IoT Reliability and Security using Automated Model Extraction and Guided Analysis
Series: Center for Cyber-Physical Systems and Internet of Things
Abstract: The number of Internet of Things (IoT) devices reached 26 billion in 2019. A typical IoT ecosystem consists of a variety of components, including the cloud, mobile devices, edge devices, and constrained devices. Although each component in IoT comes with unique capabilities and challenges, the system software that runs on each type of IoT component forms an important part of the IoT attack surface. Therefore, the ability to perform precise and scalable analysis of system software and to detect deep system vulnerabilities throughout the IoT ecosystem is critical for IoT reliability and security. System software includes the firmware, operating system, device drivers, and libraries. Despite recent advances in program analysis techniques and decision procedures, the complexity of system software creates challenges in terms of scalability and precision.
In this talk, I will introduce Model Extraction and Model Guided Analysis as an approach for effective and scalable analysis of system software. The idea is to use extracted models as oracles in a client analysis, where the client analysis can become a model extraction step for another client analysis, and so on. I will present our experience with Model Extraction and Model Guided analysis in the context of USB and Bluetooth firmware and protocol stacks, Linux device drivers, cryptographic libraries, and SGX enclaves. I will specifically discuss the motivations, challenges, and our achievements using the tools and methodologies we have developed including FirmUSB, ProXray, MOXCAFE, and PROMPT. I will conclude with a vision and a roadmap for Model Extraction and Model Guided Analysis to support the reliable and secure development and evolution of IoT frameworks.
Biography: Dr. Tuba Yavuz is currently an Assistant Professor in the Electrical and Computer Engineering Department of the University of Florida (UF). She is also affiliated with the Florida Institute of Cyber Security Research (FICS) and the Nelms Institute for the Connected World at UF. She received her Ph.D. in computer science from the Computer Science Department of the University of California, Santa Barbara in 2004. Her research areas include formal methods, software engineering, and system security. She has recently developed tools and techniques for detecting vulnerabilities and malicious behavior in system software.
Host: Chao Wang, wang626@usc.edu
Location: Hughes Aircraft Electrical Engineering Center (EEB) - 248
Audiences: Everyone Is Invited
Contact: Talyia White
-
ECE Seminar: A Cross-Stack, Network-Centric Architectural Design for Next Generation Datacenter
Tue, Mar 03, 2020 @ 10:30 AM - 11:30 AM
Ming Hsieh Department of Electrical and Computer Engineering
Conferences, Lectures, & Seminars
Speaker: Mohammad Alian, PhD Candidate, ECE Dept, UIUC
Talk Title: A Cross-Stack, Network-Centric Architectural Design for Next Generation Datacenter
Abstract: In light of technology scaling and data explosion trends, the long latency and limited bandwidth of transferring data within a computer and across computers have become a key bottleneck to improving performance and energy efficiency. Tackling this critical challenge, researchers have proposed various near-data processing architectures in the form of in-network and near-memory computing to move computation closer to data. In this talk, I will first introduce a technique that leverages the potential of in-network processing for efficient power management of network-connected computers. Then I will present Memory Channel Network (MCN), a memory-module-based, near-memory processing architecture that seamlessly unifies near-memory processing with distributed computing to accelerate data-intensive applications.
Biography: Mohammad Alian is a Ph.D. candidate in the Electrical and Computer Engineering Department of the University of Illinois at Urbana-Champaign. His research is at the intersection of computer architecture and networking, where he has proposed several cross-stack, near-memory, and in-network computing architectures. His work has been published in top computer architecture and systems venues and recognized by several best paper candidacies and an honorable mention in IEEE Micro Top Picks 2017. Mohammad holds an M.Sc. degree in computer engineering from the University of Wisconsin-Madison.
Host: Professor Murali Annavaram
Location: Hughes Aircraft Electrical Engineering Center (EEB) - 248
Audiences: Everyone Is Invited
Contact: Mayumi Thrasher
-
CS Colloquium: Kaiyu Hang (Yale University) - Robotic Manipulation – From Representations to Actions
Tue, Mar 03, 2020 @ 11:00 AM - 12:00 PM
Thomas Lord Department of Computer Science
Conferences, Lectures, & Seminars
Speaker: Kaiyu Hang, Yale University
Talk Title: Robotic Manipulation – From Representations to Actions
Series: CS Colloquium
Abstract: Dexterous manipulation is a challenging and integral task involving a number of subproblems to be addressed, such as perception, planning, and control. Problem representation, an essential element of any system that defines what problem is actually being considered, determines both the capability of a system and the feasibility of applying it in real tasks.
In this talk, I will introduce how good representations can convert difficult problems into easier ones. In particular, I will first discuss the development of representations for grasp optimization, as well as how a good representation can simplify and unify the whole grasping system, including globally optimal grasp planning, sensing, adaptation, and control. By expanding or varying this representation across problem scenarios, I further show how it can greatly facilitate solving other problems, such as grasp-aware motion planning, optimal placement planning, and even dual-arm manipulation. Second, I will introduce our work on underactuated manipulation using soft robotic hands. For underactuated hands without any joint encoders or tactile sensors, I present representations that enable a robot to interact with a tabletop object using nonprehensile manipulation in order to finally grasp it, and show how to register the object into the hand-object system once grasped, so as to eventually provide precise and dexterous in-hand manipulation. Finally, I discuss how to develop representations for optimizing robot fingertip designs, especially for simple grippers with limited grasping skills. By installing these optimized fingertips on drones, I further show that they enable the drones to perch or rest on various structures, achieving significant reductions in energy consumption.
This lecture satisfies requirements for CSCI 591: Research Colloquium
Biography: Kaiyu Hang is a postdoctoral associate working with Prof. Aaron M. Dollar at the GRAB lab, Yale University. He received his Ph.D. in Computer Science, specializing in Robotics and Computer Vision, under the supervision of Prof. Danica Kragic at KTH Royal Institute of Technology, Stockholm, Sweden. Before joining the GRAB lab, he was a research assistant professor in the Department of Computer Science and Engineering, and a Junior Fellow of the Institute for Advanced Study, at the Hong Kong University of Science and Technology. His research interests include representations and optimization for robotic manipulation, motion planning, adaptive grasping and in-hand manipulation, underactuated robotic hands, dual-arm manipulation, and mobile manipulation.
Host: Joseph Lim
Location: Olin Hall of Engineering (OHE) - 132
Audiences: Everyone Is Invited
Contact: Assistant to CS chair
-
***CANCELLED*** ISE 651 - Epstein Seminar
Tue, Mar 03, 2020 @ 03:30 PM - 04:50 PM
Daniel J. Epstein Department of Industrial and Systems Engineering
Conferences, Lectures, & Seminars
Speaker: Dr. Siqian Shen, Associate Professor, Dept. of Industrial and Operations Engineering - University of Michigan
Talk Title: Stochastic and Distributionally Robust Optimization Approaches for Improving Shared-Mobility System Design and Operations
Host: Prof. Suvrajeet Sen
More Information: March 3, 2020.pdf
Location: Ethel Percy Andrus Gerontology Center (GER) - 206
Audiences: Everyone Is Invited
Contact: Grace Owh
-
Medical Imaging Seminar
Tue, Mar 03, 2020 @ 03:30 PM - 04:30 PM
Ming Hsieh Department of Electrical and Computer Engineering
Conferences, Lectures, & Seminars
Speaker: Christian Pichot, Université Côte d'Azur, CNRS, LEAT, SophiaTech Campus
Talk Title: Microwave Tomographic Imaging of Brain Strokes
Series: Medical Imaging Seminar Series
Abstract: Brain strokes are one of the leading causes of disability and mortality in adults in developed countries. Ischemic stroke (85% of total cases) and hemorrhagic stroke (15%) must be treated with opposing therapies, and thus the nature of the stroke must be determined quickly in order to apply the appropriate treatment. Recent studies in biomedical imaging have shown that strokes produce variations in the complex electric permittivity of brain tissues, which can be detected by means of microwave tomography. Here, we present synthetic results obtained with an experimental microwave tomography-based portable system, demonstrating the feasibility of this new imaging modality for the early detection and monitoring of brain strokes. Determining the electric permittivity requires the solution of a coupled forward-inverse problem. Iterative microwave tomographic imaging requires the solution of an inverse problem based on a minimization algorithm (e.g. gradient-based), with successive solutions of a direct problem such as the accurate modeling of the whole microwave measurement system. Synthetic data are obtained with electromagnetic simulations derived from measurements of an experimental microwave imaging system developed by EMTensor GmbH (Vienna, Austria). The results demonstrate the possibility of detecting brain strokes, as well as of monitoring them during treatment, with reasonable running times for image reconstruction when the proposed reconstruction algorithm is applied using state-of-the-art numerical modeling and massively parallel computing.
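As a point of reference for the gradient-based minimization mentioned above, here is a minimal sketch of a generic iterative reconstruction loop: each iteration solves a direct (forward) problem and updates the permittivity estimate to reduce the misfit with the measured data. The toy linear forward operator, sizes, and variable names are invented stand-ins for a full electromagnetic solver; this is not the speakers' system.

```python
# Minimal sketch of gradient-based iterative reconstruction (illustrative only).
# A random linear operator A stands in for a full EM forward solver.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_meas = 64, 128                      # toy problem sizes
A = rng.standard_normal((n_meas, n_pixels))     # placeholder forward model
eps_true = rng.random(n_pixels)                 # "true" permittivity contrast
data = A @ eps_true                             # simulated measurements

def forward(eps):
    """Direct problem: predict scattered-field data for a permittivity map."""
    return A @ eps

eps = np.zeros(n_pixels)                        # initial guess
step = 1.0 / np.linalg.norm(A, 2) ** 2          # safe gradient-descent step
for _ in range(200):
    residual = forward(eps) - data              # data misfit
    grad = A.T @ residual                       # gradient of 0.5*||A eps - d||^2
    eps -= step * grad                          # update the estimate

print("relative error:", np.linalg.norm(eps - eps_true) / np.linalg.norm(eps_true))
```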
Biography: Christian Pichot is currently a Researcher Emeritus at the French National Center for Scientific Research (CNRS), at the Electronics, Antennas & Telecommunications Laboratory (LEAT), a joint Université Côte d'Azur and CNRS laboratory, 06900 Sophia Antipolis, France. He received the Ph.D. and Doctor of Science (D.Sc.) degrees from the University of Paris-Sud 11 in 1977 and 1982, respectively.
From 2000 to 2011, he was the Director of the LEAT. From 2008 to 2013, he was the co-founder and co-director of CREMANT, a joint antenna research center supported by the University of Nice-Sophia Antipolis, CNRS, and France Telecom Orange Labs. He received the European Microwave Prize in 1983. He is an IEEE Fellow for "Contributions to Microwave Imaging and Antenna Design". He received the Medal of Honor of CNRS in 2018 and the Academy of Sciences URSI-France Medal in 2019.
His research activities concern the scattering and propagation of electromagnetic waves, the radiation of antennas, and inverse scattering (microwave imaging and tomography, antenna synthesis, complex permittivity reconstruction, object detection and recognition), with applications in radar, civil engineering, non-destructive evaluation (NDE) and testing (NDT), geophysics engineering, security and military applications, antennas, telecommunications, and biomedical engineering, from VLF/LF frequencies to microwaves and millimetre waves.
Host: Krishna Nayak, knayak@usc.edu
Location: Hughes Aircraft Electrical Engineering Center (EEB) - 132
Audiences: Everyone Is Invited
Contact: Talyia White
-
CS Distinguished Lecture: Oren Etzioni (Allen Institute for AI) - Artificial Intelligence and the Future of Humanity
Tue, Mar 03, 2020 @ 04:00 PM - 05:20 PM
Thomas Lord Department of Computer Science
Conferences, Lectures, & Seminars
Speaker: Oren Etzioni, Allen Institute for AI
Talk Title: Artificial Intelligence and the Future of Humanity
Series: Computer Science Distinguished Lecture Series
Abstract: Could we wake up one morning to find that AI is poised to take over the world? Is AI the technology of unfairness and bias?
My talk will assess these concerns and sketch a more optimistic view. We will have ample warning before the emergence of superintelligence, and in the meantime we have the opportunity to create Beneficial AI:
• AI that mitigates bias rather than amplifying it
• AI that saves lives rather than taking them
• AI that helps us to solve humanity's thorniest problems
My talk builds on work at the Allen Institute for AI, a non-profit research institute based in Seattle.
This lecture satisfies requirements for CSCI 591: Research Colloquium.
Biography: Oren Etzioni launched the Allen Institute for AI, and has served as its CEO since 2014.
He has been a Professor at the University of Washington's Computer Science department since 1991, publishing papers that have garnered over 2,300 highly influential citations on Semantic Scholar.
He is also the founder of several startups including Farecast (acquired by Microsoft in 2008).
Host: Craig Knoblock and Xiang Ren
Location: Henry Salvatori Computer Science Center (SAL) - 101
Audiences: Everyone Is Invited
Contact: Computer Science Department
-
CS Colloquium: Peng Qi (Stanford University) - Explainable and Efficient Knowledge Acquisition from Text
Wed, Mar 04, 2020 @ 11:00 AM - 12:00 PM
Thomas Lord Department of Computer Science
Conferences, Lectures, & Seminars
Speaker: Peng Qi, Stanford University
Talk Title: Explainable and Efficient Knowledge Acquisition from Text
Series: CS Colloquium
Abstract: Human languages have served as the media for our knowledge over generations. With the rise of the digital world, making use of the knowledge that is encoded in text has become unprecedentedly important yet challenging. In recent years, the NLP community has made great progress towards operationalizing textual knowledge by building accurate systems that answer factoid questions. However, largely relying on matching local text patterns, these systems fall short in their ability to perform complex reasoning, which limits our effective use of textual knowledge. To address this problem, I will first talk about two distinct approaches to enable NLP systems to perform multi-step reasoning that is explainable to humans, through extracting facts from natural language and answering multi-step questions directly from text. I will then demonstrate that beyond static question answering with factoids, true informativeness of answers stems from communication. To this end, I will show how we lay the foundation for reasoning about latent information needs in conversations to effectively exchange information beyond providing factoid answers.
This lecture satisfies requirements for CSCI 591: Research Colloquium
Biography: Peng Qi is a Computer Science PhD student at Stanford University. His research interests revolve around building natural language processing systems that better bridge between humans and the large amount of textual information we are engulfed in. He is excited about building scalable and explainable AI systems, and has worked on extracting knowledge representations from text, question answering involving complex reasoning, and multi-lingual NLP.
Host: Xiang Ren
Location: Ronald Tutor Hall of Engineering (RTH) - 109
Audiences: Everyone Is Invited
Contact: Assistant to CS chair
-
Center for Cyber-Physical Systems and Internet of Things and Ming Hsieh Institute Seminar
Wed, Mar 04, 2020 @ 02:00 PM - 03:00 PM
Ming Hsieh Department of Electrical and Computer Engineering
Conferences, Lectures, & Seminars
Speaker: Jonathan Sprinkle, University of Arizona
Talk Title: Cyber-Physical Systems for Vehicle-in-the-Flow Traffic Flow Control
Series: Center for Cyber-Physical Systems and Internet of Things
Abstract: This talk describes previous and ongoing research in traffic flow control involving the University of Arizona CAT Vehicle Testbed. The focus of the research is real-time control of vehicle velocity in order to affect the velocity of other vehicles in the flow. The research and results are told through the lens of several physical validation experiments. The first experiment explores how to dampen emerging waves in traffic that are due to congestive effects. This experiment grew out of theory suggesting that traffic flow could be improved through sparse velocity control (e.g., ~5% of the vehicles) in the flow. The second experiment examines an analogous case where 100% of the vehicles are controlled, though this time using off-the-shelf (rather than customized) cruise control algorithms. The talk will examine the hypotheses, methods, and results of these experiments, and explore the theory and motivation for the research as a means to provide insight into the obtained results. The research was sponsored by the National Science Foundation under award CNS-1446435 and the Department of Energy through contract DE-EE0008872, and is collaborative work with Benedetto Piccoli, Benjamin Seibold, Dan Work, and Alexandre Bayen.
Biography: Dr. Jonathan Sprinkle is the Litton Industries John M. Leonis Distinguished Associate Professor of Electrical and Computer Engineering at the University of Arizona. In 2013 he received the NSF CAREER award, and in 2009 he received the UA's Ed and Joan Biggers Faculty Support Grant for work in autonomous systems. His work has an emphasis on industry impact; he was recognized with the UA "Catapult Award" by Tech Launch Arizona in 2014, and in 2012 his team won the NSF I-Corps Best Team award. From 2017 to 2019 he served as a Program Director at the National Science Foundation in the Division of Computer and Network Systems. His research interests and experience are in cyber-physical systems control and engineering, and he teaches courses ranging from systems modeling and control to mobile application development and software engineering.
Host: Paul Bogdan, pbogdan@usc.edu
Location: Hughes Aircraft Electrical Engineering Center (EEB) - 132
Audiences: Everyone Is Invited
Contact: Talyia White
-
AME Seminar
Wed, Mar 04, 2020 @ 03:30 PM - 04:30 PM
Aerospace and Mechanical Engineering
Conferences, Lectures, & Seminars
Speaker: Greg Ver Steeg, USC
Talk Title: Challenges and Opportunities for Accelerating Scientific Discovery with Deep Learning
Abstract: The successes of neural networks in computer vision and natural language processing have not easily translated into breakthroughs in other scientific domains. I will discuss some of the principles behind learning representations of data with deep learning and how we have adapted these ideas to study problems like gene expression, neuroimaging, and clinical health records. I will conclude with a speculative discussion about whether these methods can benefit domains that traditionally rely on large-scale numerical simulations like computational fluid dynamics.
Biography: Dr. Greg Ver Steeg is a Research Lead at ISI and a Research Associate Professor in USC's CS department. He has slowly transitioned from PhD research at Caltech on detecting quantum entanglement to his current work on detecting hidden variables in more diverse domains using information theory and machine learning. His work has been recognized with an AFOSR Young Investigator Award and an Amazon Research Award.
Host: AME Department
More Info: https://ame.usc.edu/seminars/
Location: James H. Zumberge Hall Of Science (ZHS) - 159
Audiences: Everyone Is Invited
Contact: Tessa Yao
Event Link: https://ame.usc.edu/seminars/
-
CS Colloquium: Emma Pierson (Stanford) - Data Science Methods to Reduce Inequality and Improve Healthcare
Thu, Mar 05, 2020 @ 11:00 AM - 12:00 PM
Thomas Lord Department of Computer Science
Conferences, Lectures, & Seminars
Speaker: Emma Pierson, Stanford University
Talk Title: Data Science Methods to Reduce Inequality and Improve Healthcare
Series: CS Colloquium
Abstract: I will describe how to use data science methods to understand and reduce inequality in two domains: criminal justice and healthcare. First, I will discuss how to use Bayesian modeling to detect racial discrimination in policing. Second, I will describe how to use machine learning to explain racial and socioeconomic inequality in pain.
This lecture satisfies requirements for CSCI 591: Research Colloquium
Biography: Emma Pierson is a PhD student in Computer Science at Stanford, supported by Hertz and NDSEG Fellowships. Previously, she completed a master's degree in statistics at Oxford on a Rhodes Scholarship. She develops statistical and machine learning methods to study two deeply entwined problems: reducing inequality and improving healthcare. She also writes about these topics for broader audiences in publications including The New York Times, The Washington Post, FiveThirtyEight, and Wired. Her work has been recognized by best paper (AISTATS 2018), best poster (ICML Workshop on Computational Biology), and best talk (ISMB High Throughput Sequencing Workshop) awards, and she has been named a Rising Star in EECS and Forbes 30 Under 30 in Science.
Host: Bistra Dilkina
Location: Olin Hall of Engineering (OHE) - 132
Audiences: Everyone Is Invited
Contact: Assistant to CS chair
-
Xiaonan Hui - ECE-EP Seminar, Thursday, March 5th at 11am in EEB 248
Thu, Mar 05, 2020 @ 11:00 AM - 12:30 PM
Ming Hsieh Department of Electrical and Computer Engineering
Conferences, Lectures, & Seminars
Speaker: Xiaonan Hui, Cornell University
Talk Title: Harmonic RF sensing from indoor localization to vital signs monitoring
Abstract: "When wireless is perfectly applied, the whole earth will be converted into a huge brain, which in fact it is, all things being particles of a real and rhythmic whole." For almost a century, electrical engineers have been endeavoring to approach what Nikola Tesla predicted in 1926 for a "World Wireless System". However, as of today, many hurdles remain when we think of all things connected rhythmically, with interaction and links between the cyber and the physical worlds, because sensing of the "things", especially "living things", is still heavily constrained. The location and shape of objects, as well as the vital signs of people and animals, are critical information to the overall system. In this talk, I will first highlight our solutions for highly reliable and accurate indoor RF ranging, localization, and imaging. The demonstrated radio frequency (RF) localization method bypasses the uncertainty-principle mathematical model commonly seen in radar-like systems, so that high temporal (kHz) and spatial (microns) resolutions can be achieved simultaneously with ~915 MHz signals, which penetrate deeply into many dielectrics of interest such as building materials and living tissues. Vital-sign monitoring is the second part of the talk, including heartbeat dynamics, respiration, and blood pressures of both the central and pulmonary circulations, with the new near-field coherent sensing (NCS) approach, which not only provides unparalleled RF vital-sign signal quality and sensing capability, but also does not require skin contact or motion restraint, greatly improving its applicability to people and animals. The systems in this talk can be applied to high-precision indoor localization, assisted living, RF bio-tomography, biometrics for security, wearable sensors, and clinical research. The talk will include the supporting RF theory, the design methods, and the hardware/software experimental system, but its content will be aimed at a general engineering audience.
Biography: Xiaonan Hui is a Ph.D. candidate in the School of Electrical and Computer Engineering at Cornell University. He works with Prof. Edwin Kan and focuses on radio-frequency systems for Cyber-Physical System (CPS) and Internet of Things (IoT) applications. His recent work on vital-sign acquisition for people as well as animals has been published in high-impact journals and conferences, attracting coverage from more than 30 news agencies and generating broad industrial interest in automotive, medical, pharmaceutical, and digital-agriculture applications. Moreover, his high-precision indoor localization work provides an innovative method for IoT tracking, robotic localization, and civil structure integrity monitoring. He is the principal investigator of Cornell Scale-up and Prototype Grants, the winner of the Cornell ECE Outstanding Ph.D. Thesis Research Award, and a recipient of three other fellowships. He also serves as a reviewer for Springer Nature and for IEEE journals and conferences in the areas of electromagnetic systems, vital-sign sensing, and wireless communications. More of his academic information can be found on his website: www.xiaonanhui.com
Host: ECE-Electrophysics
Location: Hughes Aircraft Electrical Engineering Center (EEB) - 248
Audiences: Everyone Is Invited
Contact: Marilyn Poplawski
-
Center for Cyber-Physical Systems and Internet of Things and Ming Hsieh Institute Seminar
Thu, Mar 05, 2020 @ 02:00 PM - 03:00 PM
Ming Hsieh Department of Electrical and Computer Engineering
Conferences, Lectures, & Seminars
Speaker: Frederic Sala, Stanford Computer Science Department
Talk Title: Structure to the Rescue: Breaking Data Barriers in Machine Learning
Series: Center for Cyber-Physical Systems and Internet of Things
Abstract: The current machine learning zeitgeist is that models are only as good as the data they are fed, so that limitations in the data---and especially mismatches with the ML algorithm---present fundamental barriers to model performance. However, for ML to continue its growth and be safely and widely deployed across domains with significant societal impact, such limitations must be minimized. In this talk, I will describe two ways to exploit structure in data to overcome apparent obstacles, with theoretical guarantees.
First, I will argue that geometry is a barrier to producing quality representations used by models. The root cause is a mismatch between the geometric structure of the data and the geometry of the model---but the issue can be resolved by adopting matching non-Euclidean geometries, relying on, for example, hyperbolic geometry for hierarchical data. Next, motivated by the fact that labeling large datasets is a major bottleneck in supervised learning, I will discuss a weak supervision framework for automating the process of labeling, overcoming the lack of hand-labeled data. This is done by encapsulating different aspects of manual labeling into heuristics whose structure is characterized by learnable accuracies and correlations. I will describe extensions of this framework to handle multitask, time-series, and other forms of structured data. This framework is widely used in industry, helping drive applications used by millions daily.
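To make the labeling-heuristics idea above concrete, here is a toy sketch in which a few hand-written heuristics vote on unlabeled examples and are combined by simple majority vote. The heuristics, label names, and examples are invented for illustration; the framework described in the talk instead learns each heuristic's accuracy and the correlations between heuristics, without ground-truth labels, to produce probabilistic training labels.

```python
# Toy weak-supervision sketch (illustrative only): hand-written labeling
# heuristics combined by majority vote over their non-abstaining votes.
ABSTAIN, SPAM, HAM = 0, 1, -1

def lf_contains_link(text):        # heuristic 1: links suggest spam
    return SPAM if "http" in text else ABSTAIN

def lf_shouting(text):             # heuristic 2: long all-caps words suggest spam
    return SPAM if any(w.isupper() and len(w) > 3 for w in text.split()) else ABSTAIN

def lf_greeting(text):             # heuristic 3: greetings suggest ham
    return HAM if text.lower().startswith(("hi", "hello", "dear")) else ABSTAIN

LABELING_FUNCTIONS = [lf_contains_link, lf_shouting, lf_greeting]

def weak_label(text):
    """Combine heuristic votes; the real framework would weight by learned accuracies."""
    votes = [lf(text) for lf in LABELING_FUNCTIONS]
    score = sum(v for v in votes if v != ABSTAIN)
    return SPAM if score > 0 else HAM if score < 0 else ABSTAIN

print(weak_label("Hello team, the notes are attached"))     # -> -1 (ham)
print(weak_label("CLICK here http://win-big.example now"))  # -> 1 (spam)
```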
Biography: Frederic Sala is a postdoctoral scholar in the Stanford Computer Science Department, advised by Chris Ré. His research interests include machine learning, data-driven systems, and information and coding theory, and in particular problems related to the analysis and design of algorithms that operate on diverse and challenging forms of data. He received the Ph.D. and M.S. degrees in Electrical Engineering from UCLA, where he received the Distinguished Ph.D. Dissertation in Signals & Systems Award from the UCLA Electrical Engineering Department, the NSF graduate fellowship, and the Edward K. Rice Outstanding Master's Student Award.
Host: Paul Bogdan, pbogdan@usc.edu
Location: Hughes Aircraft Electrical Engineering Center (EEB) - 248
Audiences: Everyone Is Invited
Contact: Talyia White
-
ECE Seminar: Collaborative Perception and Learning Between Robots and the Cloud
Thu, Mar 05, 2020 @ 02:15 PM - 03:15 PM
Ming Hsieh Department of Electrical and Computer Engineering
Conferences, Lectures, & Seminars
Speaker: Sandeep Chinchali, PhD Candidate, Dept of CS, Stanford University
Talk Title: Collaborative Perception and Learning Between Robots and the Cloud
Abstract: Augmenting robotic intelligence with cloud connectivity is considered one of the most promising solutions to cope with growing volumes of rich robotic sensory data and increasingly complex perception and decision-making tasks. While the benefits of cloud robotics have long been envisioned, there is still a lack of flexible methods to trade off the benefits of cloud computing against the end-to-end system costs of network delay, cloud storage, human annotation time, and cloud-computing time. To address this need, I will introduce decision-theoretic algorithms that allow robots to significantly transcend their on-board perception capabilities by using cloud computing, but in a low-cost, fault-tolerant manner.
Specifically, for compute-and-power-limited robots, I will present a lightweight model selection algorithm that learns when a robot should exploit low-latency on-board computation, or, when highly uncertain, query a more accurate cloud model. Then, I will present a collaborative learning algorithm that allows a diversity of robots to mine their real-time sensory streams for valuable training examples to send to the cloud for model improvement. The utility of these algorithms will be demonstrated on months of field data and experiments on state-of-the-art embedded deep learning hardware. I will conclude this talk by outlining a number of future research directions on the systems and theoretical aspects of networked system control, some of which extend beyond cloud robotics.
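The sketch below captures only the basic intuition behind such on-board-versus-cloud model selection: run the cheap on-board model and fall back to the more accurate cloud model when the local prediction is too uncertain and the query budget allows. The function names, thresholds, and budget logic are invented for illustration and are not the speaker's algorithm.

```python
# Illustrative sketch of uncertainty-gated cloud offloading (not the talk's method).
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str
    confidence: float   # e.g. max softmax probability in [0, 1]

def onboard_model(frame):
    # Stand-in for a small embedded network (fast, less accurate).
    return Prediction(label="pedestrian", confidence=0.62)

def cloud_model(frame):
    # Stand-in for a large cloud-hosted network (accurate, but costs a query).
    return Prediction(label="cyclist", confidence=0.97)

def perceive(frame, conf_threshold=0.8, query_budget=10):
    """Keep the on-board answer when it is confident enough; otherwise offload."""
    local = onboard_model(frame)
    if local.confidence >= conf_threshold or query_budget <= 0:
        return local, query_budget                   # cheap local answer
    return cloud_model(frame), query_budget - 1      # offload when uncertain

result, budget_left = perceive(frame=None)
print(result, "| remaining cloud queries:", budget_left)
```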
Biography: Sandeep Chinchali is a computer science PhD candidate at Stanford, advised by Sachin Katti and Marco Pavone. Previously, he was the first principal data scientist at Uhana, a Stanford startup working on data-driven optimization of cellular networks, now acquired by VMWare. His research on networked control has led to proof-of-concept trials with major cellular network operators and was a finalist for best student paper at Robotics: Science and Systems 2019. Prior to Stanford, he graduated from Caltech, where he worked on robotics at NASA's Jet Propulsion Lab (JPL). He is a recipient of the Stanford Graduate Fellowship and National Science Foundation (NSF) fellowships.
Host: Professor Konstantinos Psounis
Location: Michelson Center for Convergent Bioscience (MCB) - 102
Audiences: Everyone Is Invited
Contact: Mayumi Thrasher
-
Sonny Astani Civil and Environmental Engineering Seminar
Thu, Mar 05, 2020 @ 04:00 PM - 05:00 PM
Sonny Astani Department of Civil and Environmental Engineering
Conferences, Lectures, & Seminars
Speaker: Dr. Francesca Boso, Stanford University
Talk Title: Data and probabilistic forecasting in environmental applications
Abstract: Mathematical models expressing conservation of certain quantities (e.g. mass) are ubiquitous in the environmental sciences. A common challenge is the lack of enough observations to inform these models, either because data collection is costly or because it is impractical or impossible at the required level of spatial and temporal refinement. We propose a computational tool to treat the parametric uncertainty of these models, leveraging the inherent physical constraints and combining them with data. Specifically, we quantify the impact of parametric uncertainty by deriving model-dependent deterministic equations for the probability distribution (Probability Density Function, PDF, or Cumulative Distribution Function, CDF) of the model solution. These equations can be derived in exact form for a class of nonlinear hyperbolic governing laws (e.g. advection-dominated transport in heterogeneous flows), whereas in general they require the development of ad-hoc closures. I will present an overview of strategies to obtain workable PDF/CDF equations for specific conservation problems, and some recent work on how to combine them with available data to eventually reduce uncertainty.
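To make the idea of a deterministic equation for the solution's distribution concrete, the sketch below walks through the standard derivation for the simplest case, linear advection with a deterministic velocity and random initial data; the talk treats more general nonlinear laws, where the ad-hoc closures mentioned above become necessary.

```latex
% Sketch of the CDF-method derivation in its simplest setting (illustrative).
% Linear advection with deterministic speed $v$ and random initial data:
\[
  \partial_t u + v\,\partial_x u = 0, \qquad u(x,0) = u_0(x;\omega).
\]
% Define the raw (fine-grained) CDF $\Pi(U;x,t) = H\bigl(U - u(x,t)\bigr)$,
% with $H$ the Heaviside function. The chain rule gives
\[
  \partial_t \Pi + v\,\partial_x \Pi
    = -\,\delta\bigl(U - u\bigr)\bigl(\partial_t u + v\,\partial_x u\bigr) = 0,
\]
% so ensemble averaging yields a closed, deterministic equation for the CDF
% $F(U;x,t) = \langle \Pi \rangle$ of the uncertain solution:
\[
  \partial_t F + v\,\partial_x F = 0, \qquad F(U;x,0) = F_{u_0}(U;x).
\]
% When coefficients such as $v$ are themselves random, the averaged equation is
% no longer closed, which is where the closures discussed in the talk enter.
```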
Biography: Francesca is a senior research scientist in the Energy Resources Engineering Department at Stanford University, following her postdoc at the University of California, San Diego. She received her PhD in Environmental Engineering from the University of Trento, Italy, specializing in hydrology. She has been investigating uncertainty quantification for environmental applications.
Host: Dr. Felipe de Barros
Location: Michelson Center for Convergent Bioscience (MCB) - 102
Audiences: Everyone Is Invited
Contact: Evangeline Reyes
-
Medical Imaging Seminar
Fri, Mar 06, 2020 @ 10:00 AM - 11:00 AM
Ming Hsieh Department of Electrical and Computer Engineering
Conferences, Lectures, & Seminars
Speaker: Rudolf Stollberger, Graz University of Technology, Institute of Medical Engineering
Talk Title: Variational Reconstruction of Highly Undersampled 3D Multiple Frame Acquisitions
Series: Medical Imaging Seminar Series
Abstract: Time-dependent or quantitative multiple-frame acquisitions are particularly well suited to the combination of accelerated acquisition and sophisticated iterative reconstruction techniques with spatio-temporal regularization or model-based approaches. In this presentation, the potential of variational reconstruction for dynamic MRI, for ASL, and for model-based quantification is explored. Although the applications are quite different, some basic principles are common to all.
For dynamic data, iterative reconstruction with infimal convolution of total generalized variation (ICTGV) functionals has been shown to allow temporal resolution below 1 s for 3D measurements with 40 slices (320²), with excellent suppression of sub-sampling artifacts. This approach will be compared with a variational network for dynamic multi-coil cardiac data. Another example is accelerated time-encoded CAIPIRINHA ASL data. For this application, the whole brain can be acquired within a single shot, which increases robustness against motion compared to a standard segmented acquisition. A third application area is quantitative MRI. Model-based reconstruction allows the determination of 3D isotropic T1 maps (1 mm³) with an acquisition time of 1.8–1.1 s/slice for the variable flip angle (VFA) method. The variational techniques can process 4D array-coil data, which is still a challenge for DL-based approaches. Reconstruction times start at about 4 minutes for 4D ASL data and are somewhat longer for dynamic MRI, but can be many times longer for model-based reconstruction of 4D qMRI data with a nonlinear signal model like VFA.
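For orientation, the sketch below gives the generic form of the variational reconstruction problem referred to here; the exact data term, weights, and ICTGV parameterization used in the talk may differ.

```latex
% Generic variational reconstruction (illustrative form only):
\[
  \hat{u} \;=\; \arg\min_{u}\; \tfrac{1}{2}\,\bigl\lVert \mathcal{A}u - d \bigr\rVert_2^2
            \;+\; \lambda\,\mathcal{R}(u),
\]
% where $u$ is the dynamic image series, $d$ the undersampled multi-coil
% k-space data, and $\mathcal{A}$ the encoding operator (coil sensitivities,
% Fourier transform, sampling mask). An infimal-convolution regularizer such
% as ICTGV splits the image into two additive components with different
% spatio-temporal weightings,
\[
  \mathcal{R}(u) \;=\; \min_{u = u_1 + u_2}\;
      \mathrm{TGV}_{\alpha_1}(u_1) + \mathrm{TGV}_{\alpha_2}(u_2),
\]
% so that one component favors mostly static structure and the other the
% dynamics; model-based quantification (e.g. VFA T1 mapping) instead replaces
% $\mathcal{A}$ with a nonlinear operator that includes the signal model.
```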
Host: Krishna Nayak, knayak@usc.edu
Location: Hughes Aircraft Electrical Engineering Center (EEB) - 132
Audiences: Everyone Is Invited
Contact: Talyia White
-
CS Colloquium: Lili Su (MIT) - Learning with Distributed Systems: Adversary-Resilience and Neural Networks
Mon, Mar 09, 2020 @ 11:00 AM - 12:00 PM
Thomas Lord Department of Computer Science
Conferences, Lectures, & Seminars
Speaker: Lili Su, MIT
Talk Title: Learning with Distributed Systems: Adversary-Resilience and Neural Networks
Series: CS Colloquium
Abstract: In this talk, I will first talk about how to secure Federated Learning (FL) against adversarial faults.
FL is a new distributed learning paradigm proposed by Google. The goal of FL is to enable the cloud (i.e., the learner) to train a model without collecting the training data from users' mobile devices. Compared with traditional learning, FL suffers from serious security issues, and several practical constraints call for new security strategies. Towards quantitative and systematic insights into the impacts of those security issues, we formulated and studied the problem of Byzantine-resilient Federated Learning. We proposed two robust learning rules that secure gradient descent against Byzantine faults. The estimation error achieved under our more recently proposed rule is order-optimal in the minimax sense.
Then, I will briefly talk about our recent results on neural networks, including both biological and artificial neural networks. Notably, our results on the artificial neural networks (i.e., training over-parameterized 2-layer neural networks) improved the state-of-the-art. In particular, we showed that nearly-linear network over-parameterization is sufficient for the global convergence of gradient descent.
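For context on what a robust learning rule does, the sketch below shows one standard Byzantine-robust aggregation step, a coordinate-wise median in place of the plain gradient average. It illustrates the general idea only and is not necessarily one of the rules proposed in the talk.

```python
# Illustrative Byzantine-robust aggregation via coordinate-wise median.
import numpy as np

def robust_descent_step(weights, client_gradients, lr=0.1):
    """One server-side step: aggregate client gradients with a coordinate-wise
    median so a minority of arbitrarily corrupted (Byzantine) gradients cannot
    drag the update arbitrarily far, then apply gradient descent."""
    G = np.stack(client_gradients)          # shape: (n_clients, dim)
    robust_grad = np.median(G, axis=0)      # coordinate-wise median
    return weights - lr * robust_grad

rng = np.random.default_rng(1)
w = np.zeros(5)
honest = [rng.normal(loc=1.0, scale=0.1, size=5) for _ in range(8)]
byzantine = [np.full(5, -1e6), np.full(5, 1e6)]     # adversarial gradients
w = robust_descent_step(w, honest + byzantine)
print(w)   # close to -0.1 * (honest gradient ~ 1), despite the two attackers
```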
This lecture satisfies requirements for CSCI 591: Research Colloquium
Biography: Lili Su is a postdoc in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT, hosted by Professor Nancy Lynch. She received a Ph.D. in Electrical and Computer Engineering from the University of Illinois at Urbana-Champaign in 2017, supervised by Professor Nitin H. Vaidya. Her research intersects distributed systems, learning, security, and brain computing. She was the runner-up for the Best Student Paper Award at DISC 2016, and she received the 2015 Best Student Paper Award at SSS 2015. She received UIUC's Sundaram Seshu International Student Fellowship for 2016, and was invited to participate in Rising Stars in EECS (2018). She has served on TPC for several conferences including ICDCS and ICDCN.
Host: Leana Golubchik
Location: Ronald Tutor Hall of Engineering (RTH) - 109
Audiences: Everyone Is Invited
Contact: Assistant to CS chair
-
Spring 2020 Joint CSC@USC/CommNetS-MHI Seminar Series
Mon, Mar 09, 2020 @ 02:00 PM - 03:00 PM
Ming Hsieh Department of Electrical and Computer Engineering
Conferences, Lectures, & Seminars
Speaker: Bruno Ribeiro, Purdue University
Talk Title: Unearthing the relationship between graph neural networks and matrix factorization
Abstract: Graph tasks are ubiquitous, with applications ranging from recommendation systems, to language understanding, to automation with environmental awareness and molecular synthesis. A fundamental challenge in applying machine learning to these tasks has been encoding (representing) the graph structure in a way that lets ML models easily exploit the relational information in the graph, including node and edge features. Until recently, this encoding has been performed by factor models (a.k.a. matrix factorization embeddings), which arguably originated in 1904 with Spearman's common factors. Recently, however, graph neural networks have introduced a new, powerful way to encode graphs for machine learning models. In my talk, I will describe these two approaches and then introduce a unifying mathematical framework, using group theory and causality, that connects them. Using this novel framework, I will introduce new practical guidelines for generating and using node embeddings and graph representations, which fix significant shortcomings of the standard operating procedures used today.
Biography: Bruno Ribeiro is an Assistant Professor in the Department of Computer Science at Purdue University. He obtained his Ph.D. at the University of Massachusetts Amherst and did his postdoctoral studies at Carnegie Mellon University from 2013-2015. His research interests are in representation learning and data mining, with a focus on sampling and modeling relational and temporal data. He received an NSF CAREER award in 2020 and the ACM SIGMETRICS best paper award in 2016.
Host: Prof. Antonio Ortega, aortega@usc.edu
More Info: http://csc.usc.edu/seminars/2020Spring/ribeiro.html
More Information: 200309_Bruno Ribeiro_CSC Seminar.pdf
Location: Hughes Aircraft Electrical Engineering Center (EEB) - 132
Audiences: Everyone Is Invited
Contact: Brienne Moore
Event Link: http://csc.usc.edu/seminars/2020Spring/ribeiro.html
-
ECE Seminar: Software-Hardware Systems for the Internet of Things
Tue, Mar 10, 2020 @ 10:45 AM - 11:45 AM
Ming Hsieh Department of Electrical and Computer Engineering
Conferences, Lectures, & Seminars
Speaker: Professor Omid Abari, School of Computer Science, University of Waterloo
Talk Title: Software-Hardware Systems for the Internet of Things
Abstract: Recently, there has been a huge interest in Internet of Things (IoT) systems, which bring the digital world into the physical world around us. However, barriers still remain to realizing the dream applications of IoT. One of the biggest challenges in building IoT systems is the huge diversity of their demands and constraints (size, energy, latency, throughput, etc.). For example, virtual reality and gaming applications require multiple gigabits-per-second throughput and millisecond latency. Tiny sensors spread around a greenhouse or smart home must be low-cost and batteryless to be sustainable in the long run. Today's networking technologies fall short in supporting these IoT applications with a hugely diverse set of constraints and demands. As such, they require distinct innovative solutions. In this talk, I will describe how we can design a new class of networking technologies for IoT by designing software and hardware jointly, with an understanding of the intended application. In particular, I will present two examples of our solutions. The first solution tackles the throughput limitations of existing IoT networks by developing new millimeter wave devices and protocols, enabling many new IoT applications, such as untethered high-quality virtual reality. The second solution tackles the energy limitations of IoT networks by introducing new wireless devices that can sense and communicate without requiring any batteries. I demonstrate how our solution is applicable in multiple, diverse domains such as HCI, medical, and agriculture. I will conclude the talk with future directions in IoT research, both in terms of technologies and applications.
Biography: Omid Abari is an Assistant Professor at the University of Waterloo, School of Computer Science. He received his Ph.D. in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology (MIT) in 2018. His research interests are in the area of computer networks and mobile systems, with applications to the Internet of Things (IoT). He is currently leading the Intelligent Connectivity (ICON) Lab, where his team focuses on the design and implementation of novel software-hardware systems that deliver ubiquitous sensing, communication and computing at scale. His work has been selected for GetMobile research highlights (2018, 2019), and been featured by several media outlets, including Wired, TechCrunch, Engadget, IEEE Spectrum, and ACM Tech News.
Host: Professor Konstantinos Psounis
Location: Hughes Aircraft Electrical Engineering Center (EEB) - 248
Audiences: Everyone Is Invited
Contact: Mayumi Thrasher
-
CS Colloquium: Antoine Bosselut (University of Washington) - Neuro-symbolic Representations for Commonsense Knowledge and Reasoning
Tue, Mar 10, 2020 @ 11:00 AM - 12:00 PM
Thomas Lord Department of Computer Science
Conferences, Lectures, & Seminars
Speaker: Antoine Bosselut, University of Washington
Talk Title: Neuro-symbolic Representations for Commonsense Knowledge and Reasoning
Series: CS Distinguished Lectures
Abstract: Situations described using natural language are richer than what humans explicitly communicate. For example, the sentence "She pumped her fist" connotes many potential auspicious causes. For machines to understand natural language, they must be able to reason about the commonsense inferences that underlie explicitly stated information. In this talk, I will present work on combining traditional symbolic knowledge and reasoning techniques with modern neural representations to endow machines with these capacities.
First, I will describe COMET, an approach for learning commonsense knowledge about unlimited situations and concepts using transfer learning from language to knowledge. Second, I will demonstrate how these neural knowledge representations can dynamically construct symbolic graphs of contextual commonsense knowledge, and how these graphs can be used for interpretable, generalized reasoning. Finally, I will discuss current and future research directions on conceptualizing NLP as commonsense simulation, and the impact of this framing on challenging open-ended tasks such as story generation.
This lecture satisfies requirements for CSCI 591: Research Colloquium
Biography: Antoine Bosselut is a PhD Student at the University of Washington advised by Professor Yejin Choi, and a student researcher at the Allen Institute for Artificial Intelligence. His research focuses on building systems for commonsense knowledge representation and reasoning that combine the strengths of modern neural and traditional symbolic methods. He was also a student researcher on the Deep Learning team at Microsoft Research from 2017 to 2018. He is supported by an AI2 Key Scientific Challenges award.
Host: Xiang Ren
Location: Olin Hall of Engineering (OHE) - 132
Audiences: Everyone Is Invited
Contact: Assistant to CS chair
-
Microwave Inverse Imaging Meets Deep Learning
Tue, Mar 10, 2020 @ 11:00 AM - 12:00 PM
Ming Hsieh Department of Electrical and Computer Engineering
Conferences, Lectures, & Seminars
Speaker: Uday Khankhoje, Electrical Engineering at the Indian Institute of Technology Madras
Talk Title: Microwave Inverse Imaging Meets Deep Learning
Abstract: In this talk, I will start by motivating the area of inverse microwave imaging -- an area that brings together electromagnetics, signal processing, and data analytics. The objective here is to infer the electrical properties of an object by studying how it scatters electromagnetic fields -- all without making contact, i.e. remotely. The applications are diverse, from breast cancer imaging to microwave remote sensing. At the heart of this problem lies a challenging ill-posed nonlinear optimization problem. I will describe some of the contemporary methods of solving this problem and highlight the challenges faced. Subsequently, I will present some of our recent methods and results, where we have significantly pushed the state of the art by incorporating deep neural networks into existing physics-based algorithms.
Biography: Uday Khankhoje is an Assistant Professor of Electrical Engineering at the Indian Institute of Technology Madras, Chennai, India, since 2016. He received a B.Tech. degree from the Indian Institute of Technology Bombay, Mumbai, India, in 2005, an M.S. and PhD. degrees from the California Institute of Technology (Caltech), Pasadena, USA, in 2010, all in Electrical Engineering. He was a Caltech Postdoctoral Scholar at the Jet Propulsion Laboratory (NASA/Caltech) from 2011-2012, a Postdoctoral Research Associate in the Department of Electrical Engineering at the University of Southern California, Los Angeles, USA, from 2012-2013, and an Assistant Professor of Electrical Engineering at the Indian Institute of Technology Delhi from 2013-2016. His research interests are in the area of computational electromagnetics and its applications to remote sensing and inverse imaging.
Host: Prof. Constantine Sideris, csideris@usc.edu
Location: Hughes Aircraft Electrical Engineering Center (EEB) - 132
Audiences: Everyone Is Invited
Contact: Talyia White
-
ISE 651 - Epstein Seminar
Tue, Mar 10, 2020 @ 03:30 PM - 04:50 PM
Daniel J. Epstein Department of Industrial and Systems Engineering
Conferences, Lectures, & Seminars
Speaker: Dr. Satish Kumar Thittamaranahalli (T. K. Satish Kumar), USC ISI
Talk Title: Compiling Weighted Constraint Satisfaction Problems to Minimum Weighted Vertex Cover Problems
Host: Prof. Maged Dessouky
More Information: March 10, 2020.pdf
Location: Ethel Percy Andrus Gerontology Center (GER) - 206
Audiences: Everyone Is Invited
Contact: Grace Owh
-
ECE Seminar: Compiler and Runtime Systems for Homomorphic Encryption and Graph Analytics
Wed, Mar 11, 2020 @ 10:45 AM - 11:45 AM
Ming Hsieh Department of Electrical and Computer Engineering
Conferences, Lectures, & Seminars
Speaker: Roshan Dathathri, PhD candidate, Dept of CS, University of Texas at Austin
Talk Title: Compiler and Runtime Systems for Homomorphic Encryption and Graph Analytics
Abstract: Distributed and heterogeneous architectures are tedious to program because devices such as CPUs, GPUs, FPGAs, and TPUs provide different programming abstractions and may have disjoint memories, even if they are on the same machine. In this talk, I present compiler and runtime systems that make it easier to develop efficient programs for privacy-preserving computation and graph analytics applications on such architectures.
Fully Homomorphic Encryption (FHE) refers to a set of encryption schemes that allow computations on encrypted data without requiring a secret key. Recent cryptographic advances have pushed FHE into the realm of practical applications. However, programming these applications remains a huge challenge, as it requires cryptographic domain expertise to ensure correctness, security, and performance. I present CHET, a domain-specific optimizing compiler, that is designed to make the task of programming neural network inference applications using FHE easier. CHET automates many laborious and error prone programming tasks including encryption parameter selection to guarantee security and accuracy of the computation, determining efficient data layouts, and performing scheme-specific optimizations. Our evaluation of CHET on a collection of popular neural networks shows that CHET-generated programs outperform expert-tuned ones by an order of magnitude.
Applications in several areas like machine learning, bioinformatics, and security need to process and analyze very large graphs. Distributed clusters are essential in processing such graphs in reasonable time. I present a novel approach to building distributed graph analytics systems that exploits heterogeneity in processor types, partitioning policies, and programming models. The key to this approach is Gluon, a domain-specific communication-optimizing substrate. Programmers write applications in a shared-memory programming system of their choice and interface these applications with Gluon using a lightweight API. Gluon enables these programs to run on heterogeneous clusters and optimizes communication in a novel way by exploiting structural and temporal invariants of graph partitioning policies. Systems built using Gluon outperform previous state-of-the-art systems and scale well up to 256 CPUs and 64 GPUs.
Biography: Roshan is a Ph.D. candidate advised by Prof. Keshav Pingali at the University of Texas at Austin. He works on domain-specific programming languages, compilers, and runtime systems that make it easy to develop efficient sparse computation and privacy-preserving computation on large-scale distributed clusters while utilizing heterogeneous architectures. He has built programming systems for distributed and heterogeneous graph analytics and privacy-preserving neural network inference. He received his master's from the Indian Institute of Science, advised by Prof. Uday Bondhugula, where he worked on automatic parallelization of affine loop nests for distributed and heterogeneous architectures.
Host: Professor Massoud Pedram
Location: Hughes Aircraft Electrical Engineering Center (EEB) - 248
Audiences: Everyone Is Invited
Contact: Mayumi Thrasher
-
CS Colloquium: Jesse Thomason (University of Washington) - Language Grounding with Robots
Wed, Mar 11, 2020 @ 11:00 AM - 12:00 PM
Thomas Lord Department of Computer Science
Conferences, Lectures, & Seminars
Speaker: Jesse Thomason, University of Washington
Talk Title: Language Grounding with Robots
Series: CS Colloquium
Abstract: We use language to refer to objects like "toast", "plate", and "table" and to communicate requests such as "Could you make breakfast?" In this talk, I will present work on computational methods to tie language to physical, grounded meaning. Robots are an ideal platform for such work because they can perceive and interact with the world. I will discuss dialog and learning strategies I have developed to enable robots to learn from their human partners, similar to how people learn from one another through interaction. I will present methods enabling robots to understand language referring expressions like "the heavy, metallic mug", the first work showing that it is possible to learn to connect words to their perceptual properties in the visual, tactile, and auditory senses of a physical robot. I will also present benchmarks and models for translating high-level human language like "put the toast on the table" that imply latent, intermediate goals into executable sequences of agent actions with the help of low-level, step-by-step language instructions. Finally, I will discuss how my work in grounded language contributes to NLP, robotics, and the broader goals of the AI community.
This lecture satisfies requirements for CSCI 591: Research Colloquium.
Biography: Jesse Thomason is a postdoctoral researcher at the University of Washington working with Luke Zettlemoyer. He received his PhD from the University of Texas at Austin with Raymond Mooney. His research focuses on language grounding and natural language processing applications for robotics (RoboNLP). Key to this work is using dialog with humans to facilitate both robot task execution and learning to enable lifelong improvement of robots' language understanding capabilities. He has worked to encourage and promote work in RoboNLP through workshop organization at both NLP and robotics conference venues.
Host: Stefanos Nikolaidis
Location: Ronald Tutor Hall of Engineering (RTH) - 109
Audiences: Everyone Is Invited
Contact: Assistant to CS chair
-
AME Laufer Lecture - CANCELLED
Wed, Mar 11, 2020 @ 12:00 PM - 02:00 PM
Aerospace and Mechanical Engineering
Conferences, Lectures, & Seminars
Speaker: Howard A. Stone, Princeton
Abstract: This event has been cancelled.
Host: AME Department
More Info: https://ame.usc.edu/seminars/
Audiences: Everyone Is Invited
Contact: Tessa Yao
Event Link: https://ame.usc.edu/seminars/
-
*CANCELLED* CAIS Seminar: Rediet Abebe (Harvard University) - Mechanism Design for Social Good
Wed, Mar 11, 2020 @ 04:15 PM - 05:15 PM
Thomas Lord Department of Computer Science
Conferences, Lectures, & Seminars
Speaker: Rediet Abebe, Harvard University
Talk Title: Mechanism Design for Social Good
Series: USC Center for Artificial Intelligence in Society (CAIS) Seminar Series
Abstract: Algorithmic and artificial intelligence techniques show immense potential to deepen our understanding of socioeconomic inequality and inform interventions designed to improve access to opportunity. Interventions aimed at historically under-served communities are made particularly challenging by the fact that disadvantage and inequality are multifaceted, notoriously difficult to measure, and reinforced by feedback loops in underlying structures.
In this talk, we develop algorithmic and computational techniques to address these issues through two types of interventions: one in the form of allocating scarce societal resources and another in the form of improving access to information. We examine the ways in which techniques from algorithms, discrete optimization, and network and computational science can combat different forms of disadvantage, including susceptibility to income shocks, social segregation, and disparities in access to health information. We discuss current practice and policy informed by this work and close with a discussion of an emerging research area -- Mechanism Design for Social Good (MD4SG) -- around the use of algorithms, optimization, and mechanism design to address this category of problems.
This lecture satisfies requirements for CSCI 591: Research Colloquium.
Biography: Rediet Abebe is a Junior Fellow at the Harvard Society of Fellows and will be receiving her Ph.D. in computer science from Cornell University in 2019. Her research is broadly in the fields of algorithms and AI, with a focus on equity and social good concerns. As part of this research agenda, she co-founded Mechanism Design for Social Good (MD4SG), a multi-institutional, interdisciplinary research initiative working to improve access to opportunity for historically disadvantaged communities. This initiative has active participants from over 100 institutions in 20 countries and has been supported by Schmidt Futures, the MacArthur Foundation, and the Institute for New Economic Thinking.
Abebe currently serves on the NIH Advisory Committee to the Director Working Group on AI, tasked with developing a comprehensive report to the NIH leadership. She was recently named one of 35 Innovators Under 35 by the MIT Technology Review and honored in the 2019 Bloomberg 50 list as a "one to watch." Her work has been covered by outlets including Forbes, the Boston Globe, and the Washington Post. In addition to her research, she also co-founded Black in AI, a non-profit organization tackling diversity and inclusion issues in AI. Her research is deeply influenced by her upbringing in her hometown of Addis Ababa, Ethiopia.
Host: USC Center for Artificial Intelligence in Society (CAIS)
Location: Henry Salvatori Computer Science Center (SAL) - 101
Audiences: Everyone Is Invited
Contact: Computer Science Department
-
*CANCELLED* CAIS Seminar: Meredith Gore - Wildlife Trafficking in the Anthropocene: Conservation, Crime & Communities
Thu, Mar 12, 2020 @ 09:45 AM - 10:45 AM
Thomas Lord Department of Computer Science
Conferences, Lectures, & Seminars
Speaker: Meredith Gore, PhD
Talk Title: Wildlife Trafficking in the Anthropocene: Conservation, Crime & Communities
Series: USC Center for Artificial Intelligence in Society (CAIS) Seminar Series
Abstract: Levels of unsustainable and illegal natural resource exploitation have escalated in scope, scale, and severity. Illegal over-harvest of plant and animal species occurs around the world and poses risks to species, ecosystems, and people. Beyond the risk of species loss, overexploitation represents stolen natural resources and is associated with corruption, insecurity, human rights abuses, and regional destabilization in some of the world's most vulnerable developing nations. This presentation will discuss conservation criminology, an interdisciplinary and applied science for understanding risks to global natural resources.
Biography: Dr. Meredith Gore is a conservation social scientist leveraging concepts of risk to enhance understanding of human-environment relationships. Her scholarship is designed to build evidence for action. The majority of her scientific inquiry can be described as convergence research on conservation issues such as wildlife trafficking, illegal logging, fishing and mining. She received her PhD in Natural Resource Policy and Management from Cornell University, an MA in Environment and Resource Policy from George Washington University, and a BA in Anthropology and Environmental Studies from Brandeis University. She is an MSU Global Research Academy Fellow, a National Academies of Sciences Jefferson Science Fellow, a US Department of State Embassy Science Fellow, and an Emerging Wildlife Conservation Leader.
Host: USC Center for Artificial Intelligence in Society (CAIS)
Location: Ronald Tutor Hall of Engineering (RTH) - 211
Audiences: Everyone Is Invited
Contact: Computer Science Department
-
CS Colloquium: Ludwig Schmidt (UC Berkeley) - Do ImageNet Classifiers Generalize to ImageNet?
Thu, Mar 12, 2020 @ 11:00 AM - 12:00 PM
Thomas Lord Department of Computer Science
Conferences, Lectures, & Seminars
Speaker: Ludwig Schmidt, UC Berkeley
Talk Title: Do ImageNet Classifiers Generalize to ImageNet?
Series: CS Colloquium
Abstract: Progress on the ImageNet dataset seeded much of the excitement around the machine learning revolution of the past decade. In this talk, we analyze this progress in order to understand the obstacles blocking the path towards safe, dependable, and secure machine learning.
First, we will investigate the nature and extent of overfitting on ML benchmarks through reproducibility experiments for ImageNet and other key datasets. Our results show that overfitting through test set re-use is surprisingly absent, but distribution shift poses a major open problem for reliable ML.
In the second part, we will focus on a particular robustness issue, known as adversarial examples, and develop methods inspired by optimization and generalization theory to address this issue. We conclude with a large experimental study of current robustness interventions that summarizes the main challenges going forward.
This lecture satisfies requirements for CSCI 591: Research Colloquium
Biography: Ludwig Schmidt is a postdoctoral researcher at UC Berkeley working with Moritz Hardt and Ben Recht. Ludwig's research interests revolve around the empirical and theoretical foundations of machine learning, often with a focus on making machine learning more reliable. Before Berkeley, Ludwig completed his PhD at MIT under the supervision of Piotr Indyk. Ludwig received a Google PhD fellowship, a Microsoft Simons fellowship, a best paper award at the International Conference on Machine Learning (ICML), and the Sprowls dissertation award from MIT.
Host: Haipeng Luo
Location: Olin Hall of Engineering (OHE) - 132
Audiences: Everyone Is Invited
Contact: Assistant to CS chair
-
Astani Civil and Environmental Engineering Seminar
Thu, Mar 12, 2020 @ 03:00 PM - 04:00 PM
Sonny Astani Department of Civil and Environmental Engineering
Conferences, Lectures, & Seminars
Speaker: Dr. Larry Rossen, Professor Emeritus, Department of Psychology, California State University, Dominguez Hills
Talk Title: WE ARE FACING AN ATTENTION CRISIS: WHAT IS DRIVING OUR DISTRACTED MINDS?
Abstract: See attached abstract.
Host: Dr. Burcin Becerik-Gerber
More Information: Larry Rossen-Abstract_3-12-2020.pdf
Location: Hughes Aircraft Electrical Engineering Center (EEB) - 132
Audiences: Everyone Is Invited
Contact: Evangeline Reyes
-
**LOCATION CHANGE** CS Colloquium: Ioannis Panageas (SUTD) - Depth-width trade-offs for ReLU networks via Sharkovsky's theorem
Thu, Mar 12, 2020 @ 04:00 PM - 05:00 PM
Thomas Lord Department of Computer Science
Conferences, Lectures, & Seminars
Speaker: Ioannis Panageas, Singapore University of Technology and Design
Talk Title: Depth-width trade-offs for ReLU networks via Sharkovsky's theorem
Series: CS Colloquium
Abstract: Understanding the representational power of Deep Neural Networks (DNNs) and how their structural properties (e.g., depth, width, type of activation unit) affect the functions they can compute has been an important yet challenging question in deep learning and approximation theory. In a seminal paper, Telgarsky highlighted the benefits of depth by presenting a family of functions (based on simple triangular waves) for which DNNs achieve zero classification error, whereas shallow networks with fewer than exponentially many nodes incur constant error. Even though Telgarsky's work reveals the limitations of shallow neural networks, it does not explain why these functions are difficult to represent, and in fact he poses it as a tantalizing open question to characterize those functions that cannot be well-approximated by smaller depths. In this talk, we will point to a new connection between DNN expressivity and Sharkovsky's theorem from dynamical systems that enables us to characterize the depth-width trade-offs of ReLU networks for representing functions based on the presence of a generalized notion of fixed points, called periodic points (a fixed point is a point of period 1). Motivated by our observation that the triangle waves used in Telgarsky's work contain points of period 3 - a period that is special in that it implies chaotic behavior, based on the celebrated result of Li and Yorke - we will give general lower bounds for the width needed to represent periodic functions as a function of the depth. Technically, the crux of our approach is an eigenvalue analysis of the dynamical system associated with such functions.
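For readers less familiar with the dynamical-systems vocabulary, the snippet below verifies a concrete instance of the property the abstract leans on: the tent map, the basic building block of the triangle waves in Telgarsky's construction, has a period-3 orbit at {2/7, 4/7, 6/7}, and by Sharkovsky's theorem (and the Li-Yorke result) period 3 forces periodic points of every period. This check is an editorial illustration, not part of the speaker's material.

```python
from fractions import Fraction

# Tent map f(x) = 2x on [0, 1/2] and 2 - 2x on (1/2, 1]: the basic triangle wave.
def tent(x):
    return 2 * x if x <= Fraction(1, 2) else 2 - 2 * x

# {2/7, 4/7, 6/7} is a period-3 orbit: f(2/7) = 4/7, f(4/7) = 6/7, f(6/7) = 2/7.
x = Fraction(2, 7)
orbit = [x, tent(x), tent(tent(x))]
assert orbit == [Fraction(2, 7), Fraction(4, 7), Fraction(6, 7)]
assert tent(orbit[-1]) == orbit[0]   # the orbit closes after exactly 3 steps

# Sharkovsky's theorem: a continuous interval map with a period-3 point has
# periodic points of every period (3 comes first in the Sharkovsky ordering),
# which is the structure the depth-width lower bounds exploit.
print("period-3 orbit of the tent map:", orbit)
```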
This lecture satisfies requirements for CSCI 591: Research Colloquium
Biography: Ioannis Panageas is an Assistant Professor in the Information Systems Department of SUTD, a position he has held since September 2018. Prior to that, he was an MIT postdoctoral fellow working with Constantinos Daskalakis. He received his PhD in Algorithms, Combinatorics and Optimization from the Georgia Institute of Technology in 2016, a Diploma in EECS from the National Technical University of Athens (summa cum laude), and an M.Sc. in Mathematics from the Georgia Institute of Technology. His work lies at the intersection of optimization, probability, learning theory, dynamical systems, and algorithms. He is the recipient of the 2019 NRF Fellowship for AI (the analogue of the NSF CAREER award).
URL Website: https://panageas.github.io/
Host: Shaddin Dughmi
Location: Ronald Tutor Hall of Engineering (RTH) - 115
Audiences: Everyone Is Invited
Contact: Assistant to CS chair
-
Port of Los Angeles - Conflict, Commerce, and the Fight for Control - Book Launch and Lecture
Thu, Mar 12, 2020 @ 06:30 PM - 07:30 PM
Sonny Astani Department of Civil and Environmental Engineering
Conferences, Lectures, & Seminars
Speaker: Geraldine Knatz, PhD, Professor of the Practice of Policy and Engineering at USC
Talk Title: Port of Los Angeles - Conflict, Commerce, and the Fight for Control - Book Launch and Lecture
Abstract: Drawing on years of research and 200 maps and images, Knatz shapes an insightful story of the Port of Los Angeles, from its early entrepreneurs to the city's business and political leadership, and the inevitable conflicts that arose between them. Power moves disguised as bureaucratic banalities, jurisdictional feuds, and outright warfare - it is all here.
Host: .
More Information: Knatz Book Launch and Lecture.jpg
Location: Ralph And Goldy Lewis Hall (RGL) - 101
Audiences: Everyone Is Invited
Contact: Salina Palacios
-
ISE 651 - Epstein Seminar
Tue, Mar 17, 2020 @ 03:30 PM - 04:50 PM
Daniel J. Epstein Department of Industrial and Systems Engineering
Conferences, Lectures, & Seminars
Talk Title: ***NO SEMINAR - SPRING RECESS***
Location: Ethel Percy Andrus Gerontology Center (GER) - 206
Audiences: Everyone Is Invited
Contact: Grace Owh
-
ECE Seminar: Safe Deep Learning in the Feedback Loop: A Robust Control Approach
Mon, Mar 23, 2020 @ 11:00 AM - 12:00 PM
Ming Hsieh Department of Electrical and Computer Engineering
Conferences, Lectures, & Seminars
Speaker: Mahyar Fazlyab, Postdoctoral Researcher, Dept of ESE, University of Pennsylvania
Talk Title: Safe Deep Learning in the Feedback Loop: A Robust Control Approach
Abstract: Despite high-profile advances in various decision-making and classification tasks, Deep Neural Networks (DNNs) face several fundamental challenges that limit their adoption in physical or safety-critical domains. In particular, DNNs can be vulnerable to adversarial attacks and input perturbations. This issue becomes even more pressing when DNNs are used in closed-loop systems, where a small perturbation (caused by, for example, noisy measurements, uncertain initial conditions, or disturbances) can substantially impact the system being controlled. Therefore, it is of utmost importance to develop tools that can provide useful certificates of stability, safety, and robustness for DNN-driven systems.
In this talk, I will present a new framework, rooted in convex optimization and robust control, for safety verification and robustness analysis of DNNs based on semidefinite programming. The main idea is to abstract the original, nonlinear, hard-to-analyze neural network by a Quadratically-Constrained Linear Network (QCLN), in which the nonlinear components (e.g., the activation functions) are described by the quadratic constraints that all their input-output instances satisfy. This abstraction allows us to analyze various properties of DNNs (safety, local and global robustness, etc.) using semidefinite programming.
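For concreteness, the quadratic-constraint idea can be stated for a single ReLU: the input-output pair (x, y) of y = max(0, x) is exactly characterized by a small set of linear and quadratic conditions, which is what allows the nonlinearity to be absorbed into a semidefinite program. The block below records these standard constraints as a generic illustration; it is not the speaker's full QCLN formulation.

```latex
% Exact quadratic characterization of a single ReLU unit y = max(0, x):
y = \max(0, x)
\quad\Longleftrightarrow\quad
y \ge 0, \qquad y \ge x, \qquad y\,(y - x) = 0 .
% Aggregating such constraints over all neurons (e.g., via the S-procedure)
% turns questions about the safety and robustness of the whole network into
% the feasibility of a semidefinite program.
```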
Biography: Mahyar Fazlyab received the Bachelor's and Master's degrees in mechanical engineering from Sharif University of Technology, Tehran, Iran, in 2010 and 2013, respectively. He earned a Master's degree in statistics and a Ph.D. degree in Electrical and Systems Engineering (ESE) from the University of Pennsylvania (UPenn), Philadelphia, PA, USA, in 2018. Currently, he is a Postdoctoral Researcher at UPenn. His research interests are at the intersection of optimization, control, and machine learning. His current work focuses on developing optimization-based methods for safety verification of learning-enabled control systems. Dr. Fazlyab won the Joseph and Rosaline Wolf Best Doctoral Dissertation Award in 2019, awarded by the ESE Department at UPenn.
Host: Mihailo Jovanovic, mihailo@usc.edu, 213.740.4474
Webcast: https://usc.zoom.us/j/871407253
Audiences: Everyone Is Invited
Contact: Mayumi Thrasher
-
Seminar will be exclusively online (no in-room presentation) - CS Colloquium: Simon S. Du (Princeton University) - Foundations of Learning Systems with (Deep) Function Approximators
Tue, Mar 24, 2020 @ 11:00 AM - 12:00 PM
Thomas Lord Department of Computer Science
Conferences, Lectures, & Seminars
Speaker: Simon S. Du, Princeton University
Talk Title: Foundations of Learning Systems with (Deep) Function Approximators
Series: CS Colloquium
Abstract: Function approximators, such as deep neural networks, play a crucial role in building learning systems that make predictions and decisions. In this talk, I will discuss my work on understanding, designing, and applying function approximators.
First, I will focus on understanding deep neural networks. The main result is that an over-parameterized neural network is equivalent to a new kernel, the Neural Tangent Kernel. This equivalence implies two surprising phenomena: 1) simple gradient descent provably finds the global optimum of the highly non-convex empirical risk, and 2) the learned neural network generalizes well despite being highly over-parameterized. Furthermore, this equivalence helps us design a new class of function approximators: we transform (fully-connected and graph) neural networks to (fully-connected and graph) Neural Tangent Kernels, which achieve superior performance on standard benchmarks.
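As a small, generic illustration of the kernel view (not the speaker's code), the empirical neural tangent kernel of a network f with parameters theta is K(x, x') = <grad_theta f(x), grad_theta f(x')>. The sketch below evaluates it at random initialization for a one-hidden-layer ReLU network; all names and the width m = 4096 are choices made for the example.

```python
import numpy as np

# Empirical NTK of f(x) = a^T relu(W x) / sqrt(m), evaluated at initialization.
rng = np.random.default_rng(0)
d, m = 3, 4096                      # input dimension, hidden width
W = rng.standard_normal((m, d))
a = rng.choice([-1.0, 1.0], size=m)

def grad_theta(x):
    pre = W @ x                                  # pre-activations, shape (m,)
    act = np.maximum(pre, 0.0)                   # relu(W x)
    gate = (pre > 0).astype(float)               # relu derivative
    grad_a = act / np.sqrt(m)                    # df/da_j
    grad_W = (a * gate)[:, None] * x[None, :] / np.sqrt(m)   # df/dW_{j,:}
    return np.concatenate([grad_a, grad_W.ravel()])

def ntk(x1, x2):
    # Inner product of parameter gradients = empirical neural tangent kernel.
    return grad_theta(x1) @ grad_theta(x2)

x, xp = rng.standard_normal(d), rng.standard_normal(d)
print(ntk(x, xp), ntk(x, x))
# As the width m grows, this kernel concentrates around a deterministic limit
# and stays nearly constant during training, which is the equivalence the
# talk builds on.
```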
In the second part of the talk, I will focus on applying function approximators to decision-making (reinforcement learning) problems. In sharp contrast to the (simpler) supervised prediction problems, solving reinforcement learning problems requires an exponential number of samples, even if one applies function approximators. I will then discuss what additional structures permit statistically efficient algorithms.
This lecture satisfies requirements for CSCI 591: Research Colloquium
Biography: Simon S. Du is a postdoc at the Institute for Advanced Study of Princeton, hosted by Sanjeev Arora. He completed his Ph.D. in Machine Learning at Carnegie Mellon University, where he was co-advised by Aarti Singh and Barnabás Póczos. Previously, he studied EECS and EMS at UC Berkeley. He has also spent time at Simons Institute and research labs of Facebook, Google, and Microsoft. His research interests are broadly in machine learning, with a focus on the foundations of deep learning and reinforcement learning.
Host: Haipeng Luo
Location: Seminar will be exclusively online (no in-room presentation)
Audiences: Everyone Is Invited
Contact: Assistant to CS chair
-
ECE Seminar: Reliability, Equity, and Reproducibility in Modern Machine Learning
Tue, Mar 24, 2020 @ 11:00 AM - 12:00 PM
Ming Hsieh Department of Electrical and Computer Engineering
Conferences, Lectures, & Seminars
Speaker: Yaniv Romano, Postdoctoral Scholar, Dept of Statistics, Stanford University
Talk Title: Reliability, Equity, and Reproducibility in Modern Machine Learning
Abstract: Modern machine learning algorithms have achieved remarkable performance in a myriad of applications, and are increasingly used to make impactful decisions in hiring, criminal sentencing, and healthcare diagnostics, and even to make new scientific discoveries. The use of data-driven algorithms in high-stakes applications is exciting yet alarming: these methods are extremely complex, often brittle, and notoriously hard to analyze and interpret. Naturally, concerns have been raised about the reliability, fairness, and reproducibility of the output of such algorithms. This talk introduces statistical tools that can be wrapped around any "black-box" algorithm to provide valid inferential results while taking advantage of their impressive performance. We present novel developments in conformal prediction and quantile regression, which rigorously guarantee the reliability of complex predictive models, and show how these methodologies can be used to treat individuals equitably. Next, we focus on reproducibility and introduce an operational selective inference tool that builds upon the knockoff framework and leverages recent progress in deep generative models. This methodology allows for reliable identification of a subset of important features that is likely to explain a phenomenon under study in a challenging setting where the data distribution is unknown, e.g., mutations that are truly linked to changes in drug resistance.
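The "wrap around any black-box algorithm" idea is easiest to see in the split conformal prediction setting. The sketch below is a generic textbook version, not the speaker's conformalized quantile regression: fit any regressor on one half of the data, compute residuals on a held-out calibration half, and pad predictions by the appropriate empirical quantile to get distribution-free coverage.

```python
import numpy as np

# Split conformal prediction around an arbitrary "black-box" regressor.
rng = np.random.default_rng(0)

def black_box_fit(X, y):
    # Any model works; here, ordinary least squares via the pseudoinverse.
    w = np.linalg.pinv(np.c_[X, np.ones(len(X))]) @ y
    return lambda Xq: np.c_[Xq, np.ones(len(Xq))] @ w

# Synthetic data: y = 2x + noise.
X = rng.uniform(-2, 2, size=(500, 1))
y = 2 * X[:, 0] + rng.standard_normal(500)

# Split into a proper training set and a calibration set.
X_tr, y_tr, X_cal, y_cal = X[:300], y[:300], X[300:], y[300:]
model = black_box_fit(X_tr, y_tr)

alpha = 0.1
scores = np.sort(np.abs(y_cal - model(X_cal)))          # conformity scores
n = len(scores)
k = min(int(np.ceil((n + 1) * (1 - alpha))), n)          # conformal quantile rank
q = scores[k - 1]

# The interval [f(x) - q, f(x) + q] covers a fresh y with prob >= 1 - alpha.
X_test = rng.uniform(-2, 2, size=(1000, 1))
y_test = 2 * X_test[:, 0] + rng.standard_normal(1000)
pred = model(X_test)
coverage = np.mean((y_test >= pred - q) & (y_test <= pred + q))
print(f"empirical coverage: {coverage:.3f} (target >= {1 - alpha})")
```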
Biography: Yaniv Romano is a postdoctoral scholar in the Department of Statistics at Stanford University, advised by Prof. Emmanuel Candes. He earned his Ph.D. and M.Sc. degrees in 2017 from the Department of Electrical Engineering at the Technion - Israel Institute of Technology, under the supervision of Prof. Michael Elad. Before that, in 2012, Yaniv received his B.Sc. from the same department. His research spans the theory and practice of selective inference, sparse approximation, machine learning, data science, and signal and image processing. His goal is to advance the theory and practice of modern machine learning, as well as to develop statistical tools that can be wrapped around any data-driven algorithm to provide valid inferential results. Yaniv is also interested in image recovery problems: the super-resolution technology he invented together with Dr. Peyman Milanfar is being used in Google's flagship products, increasing the quality of billions of images and bringing significant bandwidth savings. In 2017, together with Prof. Michael Elad, he created a MOOC on the theory and practice of sparse representations on the edX platform. Yaniv is a recipient of the 2015 Zeff Fellowship, the 2017 Andrew and Erna Finci Viterbi Fellowship, the 2017 Irwin and Joan Jacobs Fellowship, the 2018-2020 Zuckerman Postdoctoral Fellowship, the 2018-2020 ISEF Postdoctoral Fellowship, the 2018-2020 Viterbi Fellowship for nurturing future faculty members, Technion, and the 2019-2020 Koret Postdoctoral Scholarship, Stanford University. Yaniv was awarded the 2020 SIAG/IS Early Career Prize.
Host: Salman Avestimehr, avestime@usc.edu
Webcast: https://usc.zoom.us/j/782728120
Audiences: Everyone Is Invited
Contact: Mayumi Thrasher
-
**CANCELED** ISE 651 - Epstein Seminar
Tue, Mar 24, 2020 @ 03:30 PM - 04:50 PM
Daniel J. Epstein Department of Industrial and Systems Engineering
Conferences, Lectures, & Seminars
Speaker: Dr. Adam Elmachtoub, Assistant Professor, Columbia University
Talk Title: TBD
Host: Dr. Phebe Vayanos
Location: Ethel Percy Andrus Gerontology Center (GER) - 206
Audiences: Everyone Is Invited
Contact: Grace Owh
-
Seminar will be exclusively online (no in-room presentation) - CS Colloquium: Farnaz Behrang (Georgia Institute of Technology) - Leveraging Existing Software Artifacts to Support Design, Development, and Testing of Mobile Applications
Wed, Mar 25, 2020 @ 02:00 PM - 03:00 PM
Thomas Lord Department of Computer Science
Conferences, Lectures, & Seminars
Speaker: Farnaz Behrang, Georgia Institute of Technology
Talk Title: Leveraging Existing Software Artifacts to Support Design, Development, and Testing of Mobile Applications
Series: CS Colloquium
Abstract: We are living in the era of big data, in which generating and sharing data has become much easier, and massive amounts of information are created in a fraction of a second. In the context of software engineering, in particular, the number of open-source software repositories (e.g., GitHub, Bitbucket, SourceForge) where software developers share their software artifacts is ever-increasing, and hundreds of millions of lines of code are freely available and easily accessible. This has resulted in an increasing interest in analyzing the rich data available in such repositories. In the past decade, researchers have been mining online repositories to take advantage of existing source code to support different development activities, such as bug prediction, refactoring, and API updates. Despite the large number of proposed techniques that leverage existing source code, however, these techniques mostly focus on supporting coding activities. Other important software engineering tasks, such as software design and testing, have been mostly ignored by previous work.
In this talk, I will present my research on leveraging existing source code and other related artifacts (e.g., test cases) to support the design, development, and testing of mobile applications using automated techniques. I will first present a technique that leverages the growing number of open-source apps in public repositories to support app design and development. I will then present techniques that take advantage of existing test cases to reduce the cost of testing mobile apps. I will conclude my talk sketching future research directions that I plan to pursue.
This lecture satisfies requirements for CSCI 591: Research Colloquium
Biography: Farnaz Behrang is a Ph.D. candidate in the School of Computer Science at the Georgia Institute of Technology. Her research interests lie primarily in the area of software engineering, with a focus on software analysis and testing. Her research goal is to develop automated techniques and tools that improve software quality and developer productivity. Her work has been recognized with several awards including ACM SIGSOFT Distinguished Paper Awards at MOBILESOFT 2018 and FSE 2015.
Host: Chao Wang
Location: Seminar will be exclusively online (no in-room presentation)
Audiences: Everyone Is Invited
Contact: Assistant to CS chair
-
Seminar will be exclusively online (no in-room presentation) - CS Colloquium: Aditya Grover (Stanford University) - Machine Learning for Accelerating Scientific Discovery
Thu, Mar 26, 2020 @ 11:00 AM - 12:00 PM
Thomas Lord Department of Computer Science
Conferences, Lectures, & Seminars
Speaker: Aditya Grover, Stanford University
Talk Title: Machine Learning for Accelerating Scientific Discovery
Series: CS Colloquium
Abstract: The dramatic increase in both sensor capabilities and computational power over the last few decades has created enormous opportunities for using machine learning (ML) to enhance scientific discovery. To realize this potential, ML systems must seamlessly integrate with the key tools for scientific discovery. For instance, how can we incorporate scientific domain knowledge within ML algorithms? How can we use ML to quantify uncertainty in simulations? How can we use ML to plan experiments under real-world budget constraints? For these questions, I'll first present the key computational and statistical challenges through the lens of probabilistic modeling. Next, I'll highlight limitations of existing approaches for scaling to high-dimensional data and present algorithms from my research that can effectively overcome these challenges. These algorithms are theoretically principled, domain-agnostic, and exhibit strong empirical performance. Notably, I'll describe a collaboration with chemists and material scientists where we used probabilistic models to efficiently optimize an experimental pipeline for electric batteries. Finally, I'll conclude with an overview of future opportunities for using ML to accelerate scientific discovery.
This lecture satisfies requirements for CSCI 591: Research Colloquium
Biography: Aditya Grover is a fifth-year Ph.D. candidate in Computer Science at Stanford University advised by Stefano Ermon. His research focuses on probabilistic modeling and reasoning and is grounded in real-world scientific applications. Aditya's research has been published in top scientific and ML/AI venues (e.g., Nature, NeurIPS, ICML, ICLR, AAAI, AISTATS), included in widely-used open source ML software, and deployed into production at major technology companies. His work has been recognized with a best paper award (StarAI), a Lieberman Fellowship, a Data Science Institute Scholarship, and a Microsoft Research Ph.D. Fellowship. He has also been a Teaching Fellow at Stanford since 2018, where he co-created and teaches a new class on Deep Generative Models. Previously, Aditya obtained his bachelor's degree in Computer Science and Engineering from IIT Delhi in 2015, where he received a best undergraduate thesis award.
Host: Bistra Dilkina
Location: Seminar will be exclusively online (no in-room presentation)
Audiences: Everyone Is Invited
Contact: Assistant to CS chair
-
Seminar will be exclusively online (no in-room presentation) - CS Colloquium: Zhihao Jia (Stanford University) - Automated Discovery of Machine Learning Optimizations
Thu, Mar 26, 2020 @ 04:00 PM - 05:00 PM
Thomas Lord Department of Computer Science
Conferences, Lectures, & Seminars
Speaker: Zhihao Jia, Stanford University
Talk Title: Automated Discovery of Machine Learning Optimizations
Series: CS Colloquium
Abstract: As an increasingly important workload, machine learning (ML) applications require different performance optimization techniques from traditional runtimes and compilers. In particular, to accelerate ML applications, it is generally necessary to perform ML computations on heterogeneous hardware and parallelize computations using multiple data dimensions, neither of which is even expressible in traditional compilers and runtimes. In this talk, I will describe my work on automated discovery of performance optimizations to accelerate ML computations.
TASO, the Tensor Algebra SuperOptimizer, optimizes the computation graphs of deep neural networks (DNNs) by automatically generating potential graph optimizations and formally verifying their correctness. TASO outperforms rule-based graph optimizers in existing ML systems (e.g., TensorFlow, TensorRT, and TVM) by up to 3x by automatically discovering novel graph optimizations, while also requiring significantly less human effort.
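As a concrete example of the kind of computation-graph rewrite a superoptimizer generates and then verifies (a generic illustration, not a rewrite quoted from the TASO paper), two matrix multiplications that share an input can be fused into one multiplication against concatenated weights, followed by a split. The snippet checks the equivalence numerically.

```python
import numpy as np

# A computation-graph rewrite, checked numerically:
#   matmul(X, W1), matmul(X, W2)  ==  split(matmul(X, concat(W1, W2)))
# Fusing the two matmuls usually improves hardware utilization; a graph
# superoptimizer generates candidates like this and verifies them formally.
rng = np.random.default_rng(0)
X = rng.standard_normal((8, 16))
W1 = rng.standard_normal((16, 32))
W2 = rng.standard_normal((16, 64))

# Original graph: two independent operators.
Y1, Y2 = X @ W1, X @ W2

# Rewritten graph: one fused operator followed by a split.
fused = X @ np.concatenate([W1, W2], axis=1)
Z1, Z2 = fused[:, :32], fused[:, 32:]

assert np.allclose(Y1, Z1) and np.allclose(Y2, Z2)
print("rewrite preserves the computation")
```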
FlexFlow is a system for accelerating distributed DNN training. FlexFlow identifies parallelization dimensions not considered in existing ML systems (e.g., TensorFlow and PyTorch) and automatically discovers fast parallelization strategies for a specific parallel machine. Companies and national labs are using FlexFlow to train production ML models that do not scale well in current ML systems, achieving over 10x performance improvement.
I will also outline future research directions for further automating ML systems, such as codesigning ML models, software systems, and hardware backends for end-to-end ML deployment.
This lecture satisfies requirements for CSCI 591: Research Colloquium.
Biography: Zhihao Jia is a Ph.D. candidate in the Computer Science department at Stanford University working with Alex Aiken and Matei Zaharia. His research interests lie in the intersection of computer systems and machine learning, with a focus on building efficient, scalable, and high-performance systems for ML computations.
Host: Leana Golubchik
Location: Seminar will be exclusively online (no in-room presentation)
Audiences: Everyone Is Invited
Contact: Assistant to CS chair
-
ECE Seminar: Label-free Optical Imaging of Living Biological Systems
Mon, Mar 30, 2020 @ 11:00 AM - 12:00 PM
Ming Hsieh Department of Electrical and Computer Engineering
Conferences, Lectures, & Seminars
Speaker: Sixian You, PhD, Bioengineering, UIUC
Talk Title: Label-free Optical Imaging of Living Biological Systems
Abstract: Label-free optical imaging of living biological systems offers rich information that can be of immense value in biomedical tasks such as diagnosing cancer or assessing the tumor microenvironment. Despite the exceptional theoretical potential, current label-free nonlinear microscopy platforms are challenging for real-world clinical and biological applications. The major obstacles include the lack of flexible laser sources, limited contrast, and lack of molecular specificity for diseases.
In this talk, I will present a new optical imaging platform and methodology that will address these challenges. By generating and tailoring coherent supercontinuum from photonic crystal fibers, single-source, single-shot metabolic and structural imaging can be achieved, enabling Simultaneous Label-free Auto-fluorescence Multi-harmonic (SLAM) contrast in living cells and tissues. These capabilities further motivate the development of analytical tools for tissue assessment and diagnosis, showing the broad potential of this label-free imaging technology in discovering new metabolic biomarkers and enabling real-time, point-of-procedure applications.
Biography: Sixian You received her Ph.D. in 2019 from the University of Illinois, Urbana-Champaign (UIUC), under the guidance of Prof. Stephen A. Boppart. Her primary research interest is in developing innovative optical imaging solutions for biomedicine. She is particularly interested in developing next-generation label-free multiphoton imaging technologies to study the tumor microenvironment. Sixian was awarded the Microscopy Innovation Award by the Microscopy Society of America and McGinnis Medical Innovation Graduate Fellowship by UIUC.
Host: Justin Haldar, jhaldar@usc.edu
Webcast: https://usc.zoom.us/j/402440976
Audiences: Everyone Is Invited
Contact: Mayumi Thrasher
-
Seminar will be exclusively online (no in-room presentation) - CS Colloquium: Alan Liu (Carnegie Mellon University) - Enabling Future-Proof Telemetry for Networked Systems
Tue, Mar 31, 2020 @ 11:00 AM - 12:00 PM
Thomas Lord Department of Computer Science
Conferences, Lectures, & Seminars
Speaker: Alan Liu, Carnegie Mellon University
Talk Title: Enabling Future-Proof Telemetry for Networked Systems
Series: CS Colloquium
Abstract: Today's networked systems, such as data center, cellular, and sensor networks, face increasing demands on security, performance, and reliability. To fulfill these demands, we first need to obtain timely and accurate telemetry information about what is happening in the system. For instance, understanding the volume and the number of distinct network connections can help detect and mitigate network attacks. In storage systems, identifying hot items can help balance the server load. Unfortunately, existing telemetry tools cannot robustly handle multiple telemetry tasks with diverse workloads and resource constraints.
In this talk, I will present my research on building telemetry systems that are future-proof for current and unforeseen telemetry tasks, diverse workloads, and heterogeneous platforms. I will discuss the efficient algorithms and implementations that realize this future-proof vision in network monitoring for hardware and software platforms. I will describe how bridging theory and practice with sketching and sampling algorithms can significantly reduce memory footprints and speed up computations while providing robust results. Finally, I will end the talk with new directions in obtaining future-proof analytics for other types of networked systems, such as low-power sensors and mobile devices, while enhancing energy efficiency and data privacy.
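As a generic illustration of the sketching idea (not the speaker's system), a Count-Min sketch approximates per-connection packet counts in memory that is independent of the number of connections and never undercounts, which is what keeps heavy-hitter queries cheap. All class and variable names below are invented for the example.

```python
import numpy as np

# Count-Min sketch: approximate per-key counts in O(width * depth) memory.
# Estimates never undercount; they overcount by at most eps * total_count
# with probability 1 - delta for width ~ e/eps and depth ~ ln(1/delta).
class CountMinSketch:
    def __init__(self, width=2048, depth=4, seed=0):
        self.width, self.depth = width, depth
        self.table = np.zeros((depth, width), dtype=np.int64)
        rng = np.random.default_rng(seed)
        self.salts = rng.integers(1, 2**61 - 1, size=depth)

    def _index(self, row, key):
        # One hash function per row, derived from a per-row salt.
        return hash((int(self.salts[row]), key)) % self.width

    def add(self, key, count=1):
        for r in range(self.depth):
            self.table[r, self._index(r, key)] += count

    def estimate(self, key):
        return min(self.table[r, self._index(r, key)] for r in range(self.depth))

# Example: count flows identified by (src, dst) pairs.
cms = CountMinSketch()
for _ in range(10_000):
    cms.add(("10.0.0.1", "10.0.0.2"))      # one heavy flow
cms.add(("10.0.0.3", "10.0.0.4"))          # one small flow
print(cms.estimate(("10.0.0.1", "10.0.0.2")))   # ~10000, never less
print(cms.estimate(("10.0.0.3", "10.0.0.4")))   # small
```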
This lecture satisfies requirements for CSCI 591: Research Colloquium
Biography: Alan (Zaoxing) Liu is a postdoctoral researcher at Carnegie Mellon University. His research interests are in networked and distributed systems with a recent focus on efficient system and algorithmic design for telemetry, big-data analytics, and privacy. His research papers have been published in venues such as ACM SIGCOMM, USENIX FAST, and OSDI. He is a recipient of the best paper award at USENIX FAST'19 for his work on large-scale distributed load balancing. His work has received multiple recognitions, including an ACM STOC "Best-of-Theory" plenary talk and a USENIX ATC "Best-of-Rest" selection. Prior to CMU, he obtained his Ph.D. in Computer Science from Johns Hopkins University.
Host: Ramesh Govindan
Location: Seminar will be exclusively online (no in-room presentation)
Audiences: Everyone Is Invited
Contact: Assistant to CS chair
-
**CANCELED** ISE 651 - Epstein Seminar
Tue, Mar 31, 2020 @ 03:30 PM - 04:50 PM
Daniel J. Epstein Department of Industrial and Systems Engineering
Conferences, Lectures, & Seminars
Speaker: Dr. Daniel W. Apley, Professor, Northwestern University
Talk Title: TBD
Host: Dr. Qiang Huang
Location: Ethel Percy Andrus Gerontology Center (GER) - 206
Audiences: Everyone Is Invited
Contact: Grace Owh
-
Seminar will be exclusively online (no in-room presentation) - CS Colloquium: Baharan Mirzasoleiman (Stanford University) - Efficient Machine Learning via Data Summarization
Tue, Mar 31, 2020 @ 04:00 PM - 05:00 PM
Thomas Lord Department of Computer Science
Conferences, Lectures, & Seminars
Speaker: Baharan Mirzasoleiman, Stanford University
Talk Title: Efficient Machine Learning via Data Summarization
Series: CS Colloquium
Abstract: Large datasets have been crucial to the success of modern machine learning models. However, training on massive data has two major limitations. First, it requires exceptionally large and expensive computational resources and incurs a substantial cost due to significant energy consumption.
Second, in many real-world applications such as medical diagnosis and self-driving cars, big data contains highly imbalanced classes and noisy labels. In such cases, training on the entire data does not result in a high-quality model. In this talk, I will argue that we can address the above limitations by developing techniques that identify and extract representative subsets from massive datasets. Training on representative subsets not only reduces the substantial costs of learning from big data, but also improves the resulting models' accuracy and robustness against noisy labels. I will present two key aspects to achieve this goal: (1) extracting representative data points by summarizing massive datasets; and (2) developing efficient optimization methods to learn from the extracted summaries. I will discuss how we can develop theoretically rigorous techniques that provide strong guarantees for the quality of the extracted summaries, as well as the learned models' quality and robustness against noisy labels. I will also show applications of these techniques to several problems, including summarizing massive image collections, online video summarization, and speeding up the training of machine learning models.
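As a generic illustration of extracting a representative subset (not the speaker's coreset construction), the sketch below runs greedy k-center selection, also known as farthest-point sampling: it repeatedly adds the point farthest from the current summary, so every data point ends up close to some selected representative. Names and sizes are chosen only for the example.

```python
import numpy as np

# Greedy k-center (farthest-point) selection: a simple data-summarization
# baseline that gives a 2-approximation to the optimal k-center cover.
def k_center_summary(X, k, seed=0):
    rng = np.random.default_rng(seed)
    selected = [int(rng.integers(len(X)))]
    # dist[i] = distance from point i to its nearest selected representative
    dist = np.linalg.norm(X - X[selected[0]], axis=1)
    while len(selected) < k:
        nxt = int(np.argmax(dist))          # farthest point from the summary
        selected.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(X - X[nxt], axis=1))
    return selected, dist.max()

X = np.random.default_rng(1).standard_normal((5000, 10))
summary, radius = k_center_summary(X, k=50)
print(len(summary), "representatives; max distance to summary:", round(float(radius), 3))
# Training on such a subset trades a bounded loss in fidelity for a large
# reduction in compute, which is the trade-off the talk quantifies rigorously.
```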
This lecture satisfies requirements for CSCI 591: Research Colloquium
Biography: Baharan Mirzasoleiman is a Postdoctoral Research Scholar in the Computer Science Department at Stanford University, where she works with Prof. Jure Leskovec. Baharan's research focuses on developing new methods that enable efficient exploration and learning from massive datasets. She received her PhD from ETH Zurich, working with Prof. Andreas Krause. She has also spent two summers as an intern at Google Research. She was awarded an ETH medal for Outstanding Doctoral Dissertation and a Google Anita Borg Memorial Scholarship. She was also selected as a Rising Star in EECS by MIT.
Host: Bistra Dilkina
Location: Seminar will be exclusively online (no in-room presentation)
Audiences: Everyone Is Invited
Contact: Assistant to CS chair