
Events Calendar




Events for the 4th week of March

  • ECE Seminar: Safe Deep Learning in the Feedback Loop: A Robust Control Approach

    Mon, Mar 23, 2020 @ 11:00 AM - 12:00 PM

    Ming Hsieh Department of Electrical and Computer Engineering

    Conferences, Lectures, & Seminars


    Speaker: Mahyar Fazlyab, Postdoctoral Researcher, Dept of ESE, University of Pennsylvania

    Talk Title: Safe Deep Learning in the Feedback Loop: A Robust Control Approach

    Abstract: Despite high-profile advances in various decision-making and classification tasks, Deep Neural Networks (DNNs) face several fundamental challenges that limit their adoption in physical or safety-critical domains. In particular, DNNs can be vulnerable to adversarial attacks and input perturbations. This issue becomes even more pressing when DNNs are used in closed-loop systems, where a small perturbation (caused by, for example, noisy measurements, uncertain initial conditions, or disturbances) can substantially impact the system being controlled. Therefore, it is of utmost importance to develop tools that can provide useful certificates of stability, safety, and robustness for DNN-driven systems.

    In this talk, I will present a new framework, rooted in convex optimization and robust control, for safety verification and robustness analysis of DNNs based on semidefinite programming. The main idea is to abstract the original, nonlinear, hard-to-analyze neural network by a Quadratically-Constrained Linear Network (QCLN), in which the nonlinear components (e.g., the activation functions) are described by the quadratic constraints that all their input-output instances satisfy. This abstraction allows us to analyze various properties of DNNs (safety, local and global robustness, etc.) using semidefinite programming.

    Biography: Mahyar Fazlyab received the Bachelor's and Master's degrees in mechanical engineering from Sharif University of Technology, Tehran, Iran, in 2010 and 2013, respectively. He earned a Master's degree in statistics and a Ph.D. degree in Electrical and Systems Engineering (ESE) from the University of Pennsylvania (UPenn), Philadelphia, PA, USA, in 2018. Currently, he is a Postdoctoral Researcher at UPenn. His research interests are at the intersection of optimization, control, and machine learning. His current work focuses on developing optimization-based methods for safety verification of learning-enabled control systems. Dr. Fazlyab won the Joseph and Rosaline Wolf Best Doctoral Dissertation Award in 2019, awarded by the ESE Department at UPenn.

    Host: Mihailo Jovanovic, mihailo@usc.edu, 213.740.4474

    Webcast: https://usc.zoom.us/j/871407253

    Audiences: Everyone Is Invited

    Contact: Mayumi Thrasher

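As background for the quadratic-constraint abstraction described in the abstract above, here is a minimal illustrative sketch for a single ReLU activation; it conveys the flavor of the approach rather than the speaker's exact formulation.

```latex
% Illustrative sketch only (not the speaker's exact formulation).
% A single ReLU, y = max(x, 0), satisfies the quadratic constraints
\[
  y \ge 0, \qquad y \ge x, \qquad y\,(y - x) = 0 ,
\]
% each of which can be written as [x \; y \; 1]\, Q \,[x \; y \; 1]^\top \ge 0
% (or = 0) for a suitable constant matrix Q. Stacking such constraints over
% all activations, together with quadratic descriptions of the input set and
% of the unsafe output set, and combining them with nonnegative multipliers
% (the S-procedure) yields a linear matrix inequality; searching over the
% multipliers is a semidefinite program whose feasibility certifies that no
% admissible input is mapped to an unsafe output.
```
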
  • Seminar will be exclusively online (no in-room presentation) - CS Colloquium: Simon S. Du (Princeton University) - Foundations of Learning Systems with (Deep) Function Approximators

    Tue, Mar 24, 2020 @ 11:00 AM - 12:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Simon S. Du, Princeton University

    Talk Title: Foundations of Learning Systems with (Deep) Function Approximators

    Series: CS Colloquium

    Abstract: Function approximators, such as deep neural networks, play a crucial role in building learning systems that make predictions and decisions. In this talk, I will discuss my work on understanding, designing, and applying function approximators.

    First, I will focus on understanding deep neural networks. The main result is that the over-parameterized neural network is equivalent to a new kernel, the Neural Tangent Kernel. This equivalence implies two surprising phenomena: 1) the simple gradient descent algorithm provably finds the global optimum of the highly non-convex empirical risk, and 2) the learned neural network generalizes well despite being highly over-parameterized. Furthermore, this equivalence helps us design a new class of function approximators: we transform (fully-connected and graph) neural networks to (fully-connected and graph) Neural Tangent Kernels, which achieve superior performance on standard benchmarks.

    In the second part of the talk, I will focus on applying function approximators to decision-making (i.e., reinforcement learning) problems. In sharp contrast to (simpler) supervised prediction problems, solving reinforcement learning problems can require an exponential number of samples, even if one applies function approximators. I will then discuss which additional structures permit statistically efficient algorithms.

    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Simon S. Du is a postdoc at the Institute for Advanced Study in Princeton, hosted by Sanjeev Arora. He completed his Ph.D. in Machine Learning at Carnegie Mellon University, where he was co-advised by Aarti Singh and Barnabás Póczos. Previously, he studied EECS and EMS at UC Berkeley. He has also spent time at the Simons Institute and at the research labs of Facebook, Google, and Microsoft. His research interests are broadly in machine learning, with a focus on the foundations of deep learning and reinforcement learning.

    Host: Haipeng Luo

    Location: Seminar will be exclusively online (no in-room presentation)

    Audiences: Everyone Is Invited

    Contact: Assistant to CS chair

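For readers unfamiliar with the Neural Tangent Kernel mentioned in the abstract above, here is a loose statement of the correspondence, omitting the width, parameterization, and initialization conditions under which it holds.

```latex
% Loosely stated; see the talk and papers for the precise conditions.
% For a network f(x; \theta), the (empirical) NTK at initialization \theta_0 is
\[
  \Theta(x, x') \;=\; \big\langle \nabla_\theta f(x;\theta_0),\;
                                  \nabla_\theta f(x';\theta_0) \big\rangle .
\]
% In the infinite-width limit this kernel stays essentially fixed during
% gradient-descent training on the squared loss, and the trained network's
% prediction approaches the kernel-regression predictor
\[
  f_\infty(x) \;\approx\; \Theta(x, X)\,\Theta(X, X)^{-1}\, y ,
\]
% where (X, y) are the training inputs and targets; the contribution of the
% network's output at initialization is omitted here for brevity.
```
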
  • ECE Seminar: Reliability, Equity, and Reproducibility in Modern Machine Learning

    Tue, Mar 24, 2020 @ 11:00 AM - 12:00 PM

    Ming Hsieh Department of Electrical and Computer Engineering

    Conferences, Lectures, & Seminars


    Speaker: Yaniv Romano, Postdoctoral Scholar, Dept of Statistics, Stanford University

    Talk Title: Reliability, Equity, and Reproducibility in Modern Machine Learning

    Abstract: Modern machine learning algorithms have achieved remarkable performance in a myriad of applications, and are increasingly used to make impactful decisions in the hiring process, criminal sentencing, healthcare diagnostics, and even to make new scientific discoveries. The use of data-driven algorithms in high-stakes applications is exciting yet alarming: these methods are extremely complex, often brittle, and notoriously hard to analyze and interpret. Naturally, concerns have been raised about the reliability, fairness, and reproducibility of the output of such algorithms. This talk introduces statistical tools that can be wrapped around any "black-box" algorithm to provide valid inferential results while taking advantage of their impressive performance. We present novel developments in conformal prediction and quantile regression, which rigorously guarantee the reliability of complex predictive models, and show how these methodologies can be used to treat individuals equitably. Next, we focus on reproducibility and introduce an operational selective inference tool that builds upon the knockoff framework and leverages recent progress in deep generative models. This methodology allows for reliable identification of a subset of important features that is likely to explain a phenomenon under study in a challenging setting where the data distribution is unknown, e.g., mutations that are truly linked to changes in drug resistance.

    Biography: Yaniv Romano is a postdoctoral scholar in the Department of Statistics at Stanford University, advised by Prof. Emmanuel Candes. He earned his Ph.D. and M.Sc. degrees in 2017 from the Department of Electrical Engineering at the Technion - Israel Institute of Technology, under the supervision of Prof. Michael Elad. Before that, in 2012, Yaniv received his B.Sc. from the same department. His research spans the theory and practice of selective inference, sparse approximation, machine learning, data science, and signal and image processing. His goal is to advance the theory and practice of modern machine learning, as well as to develop statistical tools that can be wrapped around any data-driven algorithm to provide valid inferential results. Yaniv is also interested in image recovery problems: the super-resolution technology he invented together with Dr. Peyman Milanfar is being used in Google's flagship products, increasing the quality of billions of images and bringing significant bandwidth savings. In 2017, he created a MOOC with Prof. Michael Elad on the theory and practice of sparse representations on the edX platform. Yaniv is a recipient of the 2015 Zeff Fellowship, the 2017 Andrew and Erna Finci Viterbi Fellowship, the 2017 Irwin and Joan Jacobs Fellowship, the 2018-2020 Zuckerman Postdoctoral Fellowship, the 2018-2020 ISEF Postdoctoral Fellowship, the 2018-2020 Viterbi Fellowship for nurturing future faculty members at the Technion, and the 2019-2020 Koret Postdoctoral Scholarship at Stanford University. Yaniv was awarded the 2020 SIAG/IS Early Career Prize.

    Host: Salman Avestimehr, avestime@usc.edu

    Webcast: https://usc.zoom.us/j/782728120

    Audiences: Everyone Is Invited

    Contact: Mayumi Thrasher

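As context for the conformal-prediction tools mentioned in the abstract above, here is a generic sketch of split conformal prediction wrapped around a black-box regressor. The `model` object and its scikit-learn-style `predict` method are assumptions made for illustration; this is not the speaker's code.

```python
import numpy as np

def split_conformal_interval(model, X_cal, y_cal, X_test, alpha=0.1):
    """Wrap a fitted black-box regressor with split conformal prediction.

    Illustrative sketch: assumes `model` exposes a scikit-learn-style
    `predict` method and that calibration and test points are exchangeable,
    in which case the returned intervals have ~(1 - alpha) marginal coverage.
    """
    # Nonconformity scores: absolute residuals on the held-out calibration set.
    scores = np.sort(np.abs(y_cal - model.predict(X_cal)))
    n = len(scores)
    # Finite-sample-corrected quantile index, clipped to the sample size.
    k = min(int(np.ceil((n + 1) * (1 - alpha))), n)
    q = scores[k - 1]
    preds = model.predict(X_test)
    return preds - q, preds + q
```
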
  • Undergraduate Admission Virtual Information Session

    Tue, Mar 24, 2020 @ 02:00 PM - 03:00 PM

    Viterbi School of Engineering Undergraduate Admission

    Workshops & Infosessions


    Our virtual information session is a live presentation from a USC Viterbi admission counselor designed for prospective first-year students and their family members to learn more about the USC Viterbi undergraduate experience. Our session will cover an overview of our undergraduate engineering programs, the application process, and more on student life. Guests will be able to ask questions and engage in further discussion toward the end of the session.

    Please register here!

    Audiences: Everyone Is Invited

    Contact: Viterbi Admission

  • **CANCELED** SE 651 - Epstein Seminar

    Tue, Mar 24, 2020 @ 03:30 PM - 04:50 PM

    Daniel J. Epstein Department of Industrial and Systems Engineering

    Conferences, Lectures, & Seminars


    Speaker: Dr. Adam Elmachtoub, Assistant Professor, Columbia University

    Talk Title: TBD

    Host: Dr. Phebe Vayanos

    Location: Ethel Percy Andrus Gerontology Center (GER) - 206

    Audiences: Everyone Is Invited

    Contact: Grace Owh

  • Computer Science General Faculty Meeting

    Wed, Mar 25, 2020 @ 12:00 PM - 02:00 PM

    Thomas Lord Department of Computer Science

    Receptions & Special Events


    Bi-weekly faculty meeting for invited full-time Computer Science faculty only. Event details are emailed directly to attendees.

    Audiences: Invited Faculty Only

    Contact: Assistant to CS chair

  • Seminar will be exclusively online (no in-room presentation) - CS Colloquium: Farnaz Behrang (Georgia Institute of Technology) - Leveraging Existing Software Artifacts to Support Design, Development, and Testing of Mobile Applications

    Wed, Mar 25, 2020 @ 02:00 PM - 03:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Farnaz Behrang, Georgia Institute of Technology

    Talk Title: Leveraging Existing Software Artifacts to Support Design, Development, and Testing of Mobile Applications

    Series: CS Colloquium

    Abstract: We are living in the era of big data, in which generating and sharing data has become much easier, and massive amounts of information are created in a fraction of a second. In the context of software engineering, in particular, the number of open-source software repositories (e.g., GitHub, Bitbucket, SourceForge) where software developers share their software artifacts is ever-increasing, and hundreds of millions of lines of code are freely available and easily accessible. This has resulted in an increasing interest in analyzing the rich data available in such repositories. In the past decade, researchers have been mining online repositories to take advantage of existing source code to support different development activities, such as bug prediction, refactoring, and API updates. Despite the large number of proposed techniques that leverage existing source code, however, these techniques mostly focus on supporting coding activities. Other important software engineering tasks, such as software design and testing, have been mostly ignored by previous work.

    In this talk, I will present my research on leveraging existing source code and other related artifacts (e.g., test cases) to support the design, development, and testing of mobile applications using automated techniques. I will first present a technique that leverages the growing number of open-source apps in public repositories to support app design and development. I will then present techniques that take advantage of existing test cases to reduce the cost of testing mobile apps. I will conclude my talk by sketching future research directions that I plan to pursue.

    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Farnaz Behrang is a Ph.D. candidate in the School of Computer Science at the Georgia Institute of Technology. Her research interests lie primarily in the area of software engineering, with a focus on software analysis and testing. Her research goal is to develop automated techniques and tools that improve software quality and developer productivity. Her work has been recognized with several awards including ACM SIGSOFT Distinguished Paper Awards at MOBILESOFT 2018 and FSE 2015.

    Host: Chao Wang

    Location: Seminar will be exclusively online (no in-room presentation)

    Audiences: Everyone Is Invited

    Contact: Assistant to CS chair

  • Seminar will be exclusively online (no in-room presentation) - CS Colloquium: Aditya Grover (Stanford University) - Machine Learning for Accelerating Scientific Discovery

    Thu, Mar 26, 2020 @ 11:00 AM - 12:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Aditya Grover, Stanford University

    Talk Title: Machine Learning for Accelerating Scientific Discovery

    Series: CS Colloquium

    Abstract: The dramatic increase in both sensor capabilities and computational power over the last few decades has created enormous opportunities for using machine learning (ML) to enhance scientific discovery. To realize this potential, ML systems must seamlessly integrate with the key tools for scientific discovery. For instance, how can we incorporate scientific domain knowledge within ML algorithms? How can we use ML to quantify uncertainty in simulations? How can we use ML to plan experiments under real-world budget constraints? For these questions, I'll first present the key computational and statistical challenges through the lens of probabilistic modeling. Next, I'll highlight limitations of existing approaches for scaling to high-dimensional data and present algorithms from my research that can effectively overcome these challenges. These algorithms are theoretically principled, domain-agnostic, and exhibit strong empirical performance. Notably, I'll describe a collaboration with chemists and material scientists where we used probabilistic models to efficiently optimize an experimental pipeline for electric batteries. Finally, I'll conclude with an overview of future opportunities for using ML to accelerate scientific discovery.

    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Aditya Grover is a fifth-year Ph.D. candidate in Computer Science at Stanford University advised by Stefano Ermon. His research focuses on probabilistic modeling and reasoning and is grounded in real-world scientific applications. Aditya's research has been published in top scientific and ML/AI venues (e.g., Nature, NeurIPS, ICML, ICLR, AAAI, AISTATS), included in widely-used open source ML software, and deployed into production at major technology companies. His work has been recognized with a best paper award (StarAI), a Lieberman Fellowship, a Data Science Institute Scholarship, and a Microsoft Research Ph.D. Fellowship. He has also been a Teaching Fellow at Stanford since 2018, where he co-created and teaches a new class on Deep Generative Models. Previously, Aditya obtained his bachelor's degree in Computer Science and Engineering from IIT Delhi in 2015, where he received a best undergraduate thesis award.

    Host: Bistra Dilkina

    Location: Seminar will be exclusively online (no in-room presentation)

    Audiences: Everyone Is Invited

    Contact: Assistant to CS chair

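The abstract above asks how to plan experiments under real-world budget constraints using probabilistic models. As a generic illustration of that idea (not the speaker's method), here is a standard Bayesian-optimization step that fits a Gaussian-process surrogate to the experiments run so far and proposes the next experiment by expected improvement.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def propose_next_experiment(X_run, y_run, X_candidates, xi=0.01):
    """Suggest the next experiment to run under a limited budget.

    Generic Bayesian-optimization sketch: fit a Gaussian-process surrogate to
    the outcomes observed so far and rank untried candidate experiments by
    expected improvement over the best outcome seen.
    """
    gp = GaussianProcessRegressor(normalize_y=True).fit(X_run, y_run)
    mu, sigma = gp.predict(X_candidates, return_std=True)
    best = np.max(y_run)
    improvement = mu - best - xi
    with np.errstate(divide="ignore", invalid="ignore"):
        z = np.where(sigma > 0, improvement / sigma, 0.0)
        ei = np.where(sigma > 0,
                      improvement * norm.cdf(z) + sigma * norm.pdf(z),
                      0.0)
    return int(np.argmax(ei))  # index of the candidate to try next
```
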
  • Undergraduate Admission Virtual Information Session

    Thu, Mar 26, 2020 @ 02:00 PM - 03:00 PM

    Viterbi School of Engineering Undergraduate Admission

    Workshops & Infosessions


    Our virtual information session is a live presentation from a USC Viterbi admission counselor designed for prospective first-year students and their family members to learn more about the USC Viterbi undergraduate experience. Our session will cover an overview of our undergraduate engineering programs, the application process, and more on student life. Guests will be able to ask questions and engage in further discussion toward the end of the session.

    Please register here!

    Audiences: Everyone Is Invited

    Contact: Viterbi Admission

  • Seminar will be exclusively online (no in-room presentation) - CS Colloquium: Zhihao Jia (Stanford University) - Automated Discovery of Machine Learning Optimizations

    Thu, Mar 26, 2020 @ 04:00 PM - 05:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Zhihao Jia, Stanford University

    Talk Title: Automated Discovery of Machine Learning Optimizations

    Series: CS Colloquium

    Abstract: As an increasingly important workload, machine learning (ML) applications require performance optimization techniques different from those used in traditional runtimes and compilers. In particular, to accelerate ML applications, it is generally necessary to perform ML computations on heterogeneous hardware and parallelize computations using multiple data dimensions, neither of which is even expressible in traditional compilers and runtimes. In this talk, I will describe my work on automated discovery of performance optimizations to accelerate ML computations.

    TASO, the Tensor Algebra SuperOptimizer, optimizes the computation graphs of deep neural networks (DNNs) by automatically generating potential graph optimizations and formally verifying their correctness. TASO outperforms rule-based graph optimizers in existing ML systems (e.g., TensorFlow, TensorRT, and TVM) by up to 3x by automatically discovering novel graph optimizations, while also requiring significantly less human effort.

    FlexFlow is a system for accelerating distributed DNN training. FlexFlow identifies parallelization dimensions not considered in existing ML systems (e.g., TensorFlow and PyTorch) and automatically discovers fast parallelization strategies for a specific parallel machine. Companies and national labs are using FlexFlow to train production ML models that do not scale well in current ML systems, achieving over 10x performance improvement.

    I will also outline future research directions for further automating ML systems, such as codesigning ML models, software systems, and hardware backends for end-to-end ML deployment.

    This lecture satisfies requirements for CSCI 591: Research Colloquium.


    Biography: Zhihao Jia is a Ph.D. candidate in the Computer Science department at Stanford University working with Alex Aiken and Matei Zaharia. His research interests lie in the intersection of computer systems and machine learning, with a focus on building efficient, scalable, and high-performance systems for ML computations.

    Host: Leana Golubchik

    Location: Seminar will be exclusively online (no in-room presentation)

    Audiences: Everyone Is Invited

    Contact: Assistant to CS chair

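To make the graph-substitution idea in the abstract above concrete, here is a toy example (not TASO itself) of one algebraic rewrite that trades two matrix multiplications for one, checked numerically on random inputs; TASO generates many such candidate substitutions automatically and verifies them formally rather than numerically.

```python
import numpy as np

def original_graph(A, B, C):
    # Two matmuls and one add: matmul(A, B) + matmul(A, C)
    return A @ B + A @ C

def rewritten_graph(A, B, C):
    # Equivalent by distributivity, but with a single matmul: matmul(A, B + C)
    return A @ (B + C)

# Toy "verification" by random testing; TASO instead verifies candidate
# substitutions formally against a small set of operator properties.
rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((64, 64)) for _ in range(3))
assert np.allclose(original_graph(A, B, C), rewritten_graph(A, B, C))
```
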
  • Repeating Event: Grammar Tutoring

    Fri, Mar 27, 2020 @ 10:00 AM - 12:00 PM

    Viterbi School of Engineering Student Affairs

    Workshops & Infosessions


    INDIVIDUAL GRAMMAR TUTORIALS
    Need help refining your grammar skills in your academic and professional writing? Meet one-on-one with professors from the Engineering Writing Program, work together on your grammar skills, and take your writing to the next level!

    ALL VITERBI UNDERGRADUATE AND GRADUATE STUDENTS WELCOME!
    Sign up here: http://bit.ly/grammaratUSC

    All sessions will be via Zoom.

    Questions? Contact helenhch@usc.edu

    Location: ZOOM

    Audiences: Graduate and Undergraduate Students

    Contact: Helen Choi
