University of Southern California

Events Calendar

Events for November 18, 2022

  • DEN@Viterbi - 'Limited Status: How to Get Started' Virtual Info Session

    Fri, Nov 18, 2022 @ 11:00 AM - 12:00 PM

    DEN@Viterbi, Viterbi School of Engineering Graduate Admission

    Workshops & Infosessions

    Join USC Viterbi for our upcoming Limited Status: How to Get Started Virtual Information Session via WebEx to learn about the Limited Status enrollment option. Limited Status enrollment allows individuals with an undergraduate degree in engineering or a related field and a GPA of 3.0 or above to take courses before applying for formal admission to a Viterbi graduate degree program.

    USC Viterbi representatives will provide a step-by-step guide for how to get started as a Limited Status student and enroll in courses online via DEN@Viterbi as early as the Spring 2023 semester.

    Register Now!

    WebCast Link: https://uscviterbi.webex.com/uscviterbi/onstage/g.php?MTID=e21c9e9c52fa44bd72f923ea7aadbff75

    Audiences: Everyone Is Invited

    Contact: Corporate & Professional Programs

  • The EiS Communications Hub's Three Minute Thesis Contest for Ph.D. Students

    Fri, Nov 18, 2022 @ 12:00 PM - 01:30 PM

    Viterbi School of Engineering Student Affairs

    Workshops & Infosessions

    The EiS Communications Hub presents USC Viterbi's first Three Minute Thesis contest!

    An 80,000-word thesis would take 9 hours to present, but in this contest, Ph.D. students will present their research in just 3 minutes with just one slide.

    Join us and support Ph.D. students as they share their work! Vote for your favorite speaker as the "people's choice!"

    Questions? Contact eishub@usc.edu.

    Location: Ronald Tutor Hall of Engineering (RTH) - 115

    Audiences: Everyone Is Invited

    Contact: Helen Choi

  • ECE Seminar: Learning Efficiently in Data-Scarce Regimes

    Fri, Nov 18, 2022 @ 01:00 PM - 02:00 PM

    Ming Hsieh Department of Electrical and Computer Engineering

    Conferences, Lectures, & Seminars

    Speaker: Dr. Mohammad Rostami, Research Assistant Professor, Dept of CS / Research Lead, USC-ISI

    Talk Title: Learning Efficiently in Data-Scarce Regimes

    Abstract: The unprecedented processing demand posed by the explosion of big data challenges researchers to design efficient and adaptive machine learning algorithms that do not require persistent retraining and avoid learning redundant information. Inspired by the learning techniques of intelligent biological agents, identifying transferable knowledge across learning problems has been a significant research focus for improving machine learning algorithms. In this talk, we explain how the challenges of knowledge transfer can be addressed through embedding spaces that capture and store hierarchical knowledge.

    We first focus on the problem of cross-domain knowledge transfer. We explore the problem of zero-shot image classification, where the goal is to identify images from unseen classes using semantic descriptions of these classes. We train two coupled dictionaries that align visual and semantic domains via an intermediate embedding space. We then extend this idea by training deep networks that match data distributions of two visual domains in a shared cross-domain embedding space.

    We then investigate the problem of cross-task knowledge transfer in sequential learning settings. Here, the goal is to identify relations and similarities of multiple machine learning tasks to improve performance across the tasks. We first address the problem of zero-shot learning in a lifelong machine learning setting, where the goal is to learn tasks with no data using high-level task descriptions. Our idea is to relate high-level task descriptors to the optimal task parameters through an embedding space. We then develop a method to overcome the problem of catastrophic forgetting within a continual learning setting of deep neural networks by enforcing the tasks to share the same distribution in the embedding space.

    Finally, we focus on current research directions to expand past progress and plans for future research directions. Through this talk, we demonstrate that despite major differences, problems within the above learning scenarios can be tackled using a unifying strategy that allows transferring knowledge effectively.

    Biography: Mohammad Rostami is a research assistant professor in the USC CS department and a research lead at the USC Information Sciences Institute. He received his Ph.D. in Electrical and Systems Engineering from the University of Pennsylvania in August 2019. He also received an M.S. degree in Robotics and an M.A. degree in Philosophy at Penn. Before Penn, he obtained an M.Sc. degree in Electrical and Computer Engineering from the University of Waterloo, and B.Sc. degrees in Electrical Engineering and in Mathematics from the Sharif University of Technology. His current research area is machine learning in time-dependent and data-scarce regimes.

    Host: Dr. Richard M. Leahy

    More Information: Seminar Announcement-Rostami-111822.pdf

    Location: Hughes Aircraft Electrical Engineering Center (EEB) - 539

    WebCast Link: https://usc.zoom.us/j/97552157471?pwd=RnVGWm10RlRORFU0cG5RYWVWU0R0Zz09

    Audiences: Everyone Is Invited

    Contact: Mayumi Thrasher

  • PhD Thesis Proposal - Zimo Li

    Fri, Nov 18, 2022 @ 02:00 PM - 03:30 PM

    Thomas Lord Department of Computer Science

    University Calendar

    PhD Candidate: Zimo Li

    Title: Human Appearance and Performance Synthesis Using Deep Learning

    Committee:
    Stefanos Nikolaidis
    Aiichiro Nakano
    Andrew Nealen
    Laurent Itti
    Mike Zyda


    Synthesis of human performances is a highly sought-after technology in several industries. In this presentation, we will go over several new deep learning solutions that tackle the problems of human facial and body performance synthesis.

    Facial performance synthesis is a complex multi-step graphics problem. First, the "target" performance to be modified must be tracked and captured accurately. Then, based on the desired modification (whether to change the identity, facial expressions, or both), a modified "source" performance must be synthesized and/or captured from a different actor. Finally, the original facial performance must be removed and replaced with the synthesized one. This multi-step process poses many unique challenges. Conventional CG tracking and retargeting of expressions from source to target using a 3D mesh and a static texture gives an undesired "rubbery skin" effect. Furthermore, inaccuracies in the expression tracking of the source performance using a blendshape model result in the "uncanny valley" effect in the output performance. It is often necessary to use costly capture methods, such as a Light Stage, to obtain highly accurate 3D captures and dynamic textures of a source performance in order to avoid these pitfalls. Even then, final modified performances are often uncanny.

    When dealing with human body-motion synthesis, creating new motions often requires manual artist animation, tracking new motions on an actor, or stitching together subsequences of previous animations. These methods are limited by cost or are unable to generate appreciably novel motions.

    Over the last several years, the advancement of AI-based generation techniques has let us address many of these issues. In this presentation, we will go over several novel techniques that reduce the cost (time, money, ease of access) and/or improve the quality of facial re-enactment and body-motion synthesis pipelines. These techniques let us tackle new problem settings efficiently, including visual dubbing (changing the lip motions of a facial performance), dynamic texture synthesis, 3D model generation, and extended human motion synthesis.

    WebCast Link: https://us05web.zoom.us/j/81890781474?pwd=cjQ3YkVDT3drMlQ2VWtlbjU2YWxyZz09

    Audiences: Everyone Is Invited

    Contact: Lizsl De Leon
