Events Calendar



University Calendar
Events for November

  • PhD Defense - Qiangeng Xu

    Fri, Nov 11, 2022 @ 11:00 AM - 12:30 PM

    Thomas Lord Department of Computer Science

    University Calendar


    PhD Candidate: Qiangeng Xu

    Committee Chair: Prof. Ulrich Neumann from the CS department.
    Committee Member: Prof. Jernej Barbic from the CS department.
    Committee Member: Prof. Justin Haldar from the EE department.

    Topic: Point-based Neural Radiance Fields

    Abstract:
    3D scene reconstruction is one of the core problems of 3D understanding. Reconstructing 3D scenes from 2D images is among the hardest but most useful tasks for autonomous agents. In contrast to rendering, which obtains 2D images from 3D scenes, this task is a reverse rendering problem and can be solved by optimizing a differentiable rendering model with backpropagation.

    The current state-of-the-art reverse rendering models utilize implicit functions such as neural radiance fields (NeRF) to represent the 3D scene. However, they can only be optimized per scene and do not scale. On the other hand, deep multi-view stereo methods can quickly reconstruct scene geometry via direct network inference. We propose point-based neural radiance fields that combine explicit (points) and implicit (neural radiance fields) representations by using neural 3D point clouds, with associated neural features, to model a radiance field. Our model can be rendered efficiently by aggregating neural point features near scene surfaces in a ray-marching-based rendering pipeline.
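
    The following toy sketch (ours, not the candidate's code; all names are illustrative) shows the core idea in plain numpy: features of nearby neural points are aggregated by inverse-distance weighting at each ray sample, decoded to density and color by a stand-in for the MLP, and alpha-composited along the ray.

        # Illustrative sketch of point-based radiance-field rendering, not
        # the authors' implementation: aggregate neural point features near
        # each ray sample, decode to (density, rgb), composite along the ray.
        import numpy as np

        rng = np.random.default_rng(0)

        # A toy "neural point cloud": positions plus learned feature vectors.
        points = rng.uniform(-1, 1, size=(500, 3))
        features = rng.normal(size=(500, 32))

        def aggregate(x, k=8):
            """Inverse-distance-weighted average of the k nearest point features."""
            d = np.linalg.norm(points - x, axis=1)
            idx = np.argsort(d)[:k]
            w = 1.0 / (d[idx] + 1e-6)
            return (w[:, None] * features[idx]).sum(0) / w.sum()

        def decode(feat):
            """Stand-in for a small MLP mapping a feature to (density, rgb)."""
            sigma = np.maximum(feat[:1].sum(), 0.0)
            rgb = 1 / (1 + np.exp(-feat[1:4]))  # sigmoid to [0, 1]
            return sigma, rgb

        def render_ray(origin, direction, n_samples=32, t_far=2.0):
            ts = np.linspace(0.05, t_far, n_samples)
            dt = ts[1] - ts[0]
            color, transmittance = np.zeros(3), 1.0
            for t in ts:  # ray marching
                sigma, rgb = decode(aggregate(origin + t * direction))
                alpha = 1.0 - np.exp(-sigma * dt)
                color += transmittance * alpha * rgb
                transmittance *= 1.0 - alpha
            return color

        print(render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0])))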

    Zoom link: https://usc.zoom.us/j/7125769726

    WebCast Link: https://usc.zoom.us/j/7125769726

    Audiences: Everyone Is Invited

    Contact: Lizsl De Leon

  • PhD Thesis Proposal - Ali Alotaibi

    Tue, Nov 15, 2022 @ 08:00 AM - 10:00 AM

    Thomas Lord Department of Computer Science

    University Calendar


    PhD Candidate: Ali Alotaibi

    Title: Automated Repair of Layout Accessibility Issues in Mobile Applications

    Time: Tuesday, November 15, 8:00 AM-10:00 AM PST

    Committee: William GJ Halfond (chair), Murali Annavaram, Nenad Medvidovic, Mukund Raghothaman, and Chao Wang.

    Abstract:
    An increasing number of people are now dependent on mobile devices to access data and complete essential tasks. For people with disabilities, mobile apps that violate accessibility guidelines can prevent them from carrying out these activities. Layout accessibility issues are among the top accessibility issues in mobile applications. These issues impact the accessibility of mobile apps and make them difficult to use, especially for older people and people with disabilities. Unfortunately, existing techniques are limited in helping developers debug these issues. These techniques are only capable of detecting the issues. Therefore, the repair of layout accessibility issues remains a manual process.

    Automated repair of layout accessibility issues is complicated by several challenges. First, a repair must account for multiple issues holistically in order to preserve the relative consistency of the original app design. Second, due to the complex relationships between UI components, there is no clear way of identifying the set of elements and properties that need to be modified for a given issue. Third, even assuming the relevant views and properties can be identified, the number of possible changes grows exponentially with the number of elements and properties under consideration. Finally, a change in one element can create cascading changes that lead to further problems in other areas of the UI. Together, these challenges make a seemingly simple repair difficult to achieve. In this thesis proposal, I propose an automated framework for repairing layout accessibility issues in mobile applications.
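
    As a toy illustration of the search problem described above (hypothetical properties and names, not the proposed framework), the sketch below scores candidate size changes by how many violations they fix while penalizing deviation from the original design; note how the candidate space grows exponentially with the number of elements and properties.

        # Toy greedy repair of a touch-target-size violation: search small
        # size increments, prefer fewer violations, then less deviation.
        from itertools import product

        MIN_TOUCH_DP = 48  # e.g., a minimum touch-target size guideline

        elements = [
            {"id": "ok_btn", "w": 40, "h": 40},
            {"id": "menu",   "w": 48, "h": 30},
        ]

        def violations(elems):
            return sum(e["w"] < MIN_TOUCH_DP or e["h"] < MIN_TOUCH_DP for e in elems)

        def deviation(elems, original):
            return sum(abs(e["w"] - o["w"]) + abs(e["h"] - o["h"])
                       for e, o in zip(elems, original))

        def repair(elems):
            """Try every combination of small increments on every element's
            width and height: 3 deltas ** (2 properties * N elements)."""
            best, best_cost = elems, (violations(elems), 0)
            deltas = [0, 8, 16]
            for combo in product(deltas, repeat=2 * len(elems)):
                cand = [dict(e, w=e["w"] + combo[2*i], h=e["h"] + combo[2*i + 1])
                        for i, e in enumerate(elems)]
                cost = (violations(cand), deviation(cand, elems))
                if cost < best_cost:
                    best, best_cost = cand, cost
            return best

        print(repair(elements))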

    Zoom Link: https://usc.zoom.us/j/98863735277?pwd=MTVITkNqY2dQdmhKWWRkRElWeVppUT09

    WebCast Link: https://usc.zoom.us/j/98863735277?pwd=MTVITkNqY2dQdmhKWWRkRElWeVppUT09

    Audiences: Everyone Is Invited

    Contact: Lizsl De Leon

  • PhD Defense - Aaron Chan

    Tue, Nov 15, 2022 @ 03:00 PM - 05:00 PM

    Thomas Lord Department of Computer Science

    University Calendar


    PhD Candidate: Aaron Chan

    Title: Generating and Utilizing Machine Explanations for Trustworthy NLP

    Time: Tuesday, November 15, 3:00PM-5:00PM PST

    Committee: Xiang Ren (chair), Robin Jia, Jesse Thomason, Bistra Dilkina, Morteza Dehghani

    Abstract:
    Neural language models (LMs) have yielded remarkable success on a wide range of natural language processing (NLP) tasks. However, LMs sometimes exhibit undesirable behavior, which can be difficult to resolve due to LMs' opaque reasoning processes. This lack of transparency poses serious concerns about LMs' trustworthiness in high-stakes decision-making, thus motivating the use of machine explanations to automatically interpret how LMs make their predictions. In my thesis, I argue that building human trust in NLP systems requires being able to: (A) generate machine explanations for LM behavior faithfully and plausibly and (B) utilize machine explanations to improve LM generalization and decision-making. First, to address (A), I propose UNIREX, a unified learning framework for jointly optimizing machine explanations with respect to both faithfulness and plausibility, without compromising the LM's task performance. Second, for (B), I introduce ER-Test, a framework for evaluating the out-of-distribution generalization ability of LMs that are regularized via strongly-supervised machine explanations. Third, to further support (B), I present SalKG, an algorithm for improving LM generalization by regularizing LMs via weakly-supervised machine explanations. Finally, I discuss several future directions for achieving (A) and (B).
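
    As a generic illustration of the kind of joint objective described above (the weights and loss terms are ours, not UNIREX's exact formulation), the sketch below combines a task loss with a sufficiency-style faithfulness term and a plausibility term that matches token saliency to human rationales.

        # Sketch of jointly optimizing task performance, explanation
        # faithfulness, and explanation plausibility (illustrative only).
        import numpy as np

        def task_loss(logits, label):
            """Cross-entropy on the model's prediction."""
            p = np.exp(logits - logits.max()); p /= p.sum()
            return -np.log(p[label] + 1e-12)

        def faithfulness_loss(full_logits, rationale_logits, label):
            """Sufficiency-style proxy: keeping only the rationale tokens
            should not hurt the prediction."""
            return max(0.0, task_loss(rationale_logits, label)
                            - task_loss(full_logits, label))

        def plausibility_loss(saliency, human_rationale):
            """Binary cross-entropy between token saliency and human highlights."""
            s = np.clip(saliency, 1e-6, 1 - 1e-6)
            h = human_rationale
            return float(-(h * np.log(s) + (1 - h) * np.log(1 - s)).mean())

        def total_loss(full_logits, rationale_logits, saliency, human_rationale,
                       label, alpha=0.5, beta=0.5):
            return (task_loss(full_logits, label)
                    + alpha * faithfulness_loss(full_logits, rationale_logits, label)
                    + beta * plausibility_loss(saliency, human_rationale))

        print(total_loss(np.array([2.0, 0.5]), np.array([1.5, 0.7]),
                         np.array([0.9, 0.2, 0.8]), np.array([1, 0, 1]), label=0))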

    Zoom Link: https://usc.zoom.us/j/95606515253?pwd=QzBvaVVpcWtYSFhVYzVoUi9tdHBRdz09

    WebCast Link: https://usc.zoom.us/j/95606515253?pwd=QzBvaVVpcWtYSFhVYzVoUi9tdHBRdz09

    Audiences: Everyone Is Invited

    Contact: Lizsl De Leon

  • PhD Defense - Aleksei Petrenko

    Wed, Nov 16, 2022 @ 03:00 PM - 04:30 PM

    Thomas Lord Department of Computer Science

    University Calendar


    PhD Candidate: Aleksei Petrenko

    Thesis title: High-Throughput Methods for Simulation and Deep Reinforcement Learning

    Committee members: Gaurav S. Sukhatme (chair), Stefanos Nikolaidis, Jesse Thomason, Mike Zyda, and Rahul Jain

    Location: RTH 306
    Date: November 16, 2022
    Time: 3 pm
    Zoom link: https://usc.zoom.us/j/8712894950

    Thesis abstract:

    Advances in computing hardware and machine learning have enabled a data-driven approach to robotic autonomy where control policies are learned from raw data via interactive experience collection and learning. In this thesis, we discuss a specific implementation of this approach: we show how control policies can be trained in simulated environments using model-free deep reinforcement learning techniques and then be deployed on real robotic systems.
    We build towards this vision by developing tools for efficient simulation and learning under a constrained computational budget. We improve the systems design of reinforcement learning algorithms and simulators to create high-throughput GPU-accelerated infrastructure for rapid experimentation. We then apply these systems and algorithms to continuous control problems in challenging domains. We first consider the problem of quadrotor swarm coordination. By scaling up training in a CPU-based flight simulator, we train robust policies that are able to control physical quadrotors flying in tight formations. We then use large-batch reinforcement learning in the massively parallel physics simulator IsaacGym to learn dexterous object manipulation with a multi-fingered robotic hand, and we transfer these skills from simulation to reality using automatic domain randomization.
    The high-throughput learning infrastructure developed for these and other projects is released as an open-source codebase "Sample Factory 2.0" to facilitate and accelerate further progress in the field.
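
    As a minimal illustration of the throughput idea behind such infrastructure (a toy environment and random policy, not Sample Factory itself), the sketch below steps many environment copies in one batched call so that a single policy inference serves over a thousand transitions.

        # Toy vectorized rollout: just the batching idea, not Sample Factory.
        import time
        import numpy as np

        rng = np.random.default_rng(0)

        class VecEnv:
            """Toy vectorized environment: each copy holds a scalar state."""
            def __init__(self, n):
                self.state = np.zeros(n)

            def step(self, actions):
                self.state += actions  # all copies advance in one call
                reward = -np.abs(self.state)
                return self.state.copy(), reward

        def policy(obs):
            """Stand-in for batched neural-network inference."""
            return rng.standard_normal(obs.shape)

        envs = VecEnv(n=1024)
        obs = envs.state.copy()
        start, steps = time.perf_counter(), 200
        for _ in range(steps):
            obs, _ = envs.step(policy(obs))
        elapsed = time.perf_counter() - start
        print(f"{steps * 1024 / elapsed:,.0f} env steps/sec")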

    Location: Ronald Tutor Hall of Engineering (RTH) - 306

    WebCast Link: https://usc.zoom.us/j/8712894950

    Audiences: Everyone Is Invited

    Contact: Lizsl De Leon

  • PhD Thesis Proposal - Zimo Li

    Fri, Nov 18, 2022 @ 02:00 PM - 03:30 PM

    Thomas Lord Department of Computer Science

    University Calendar


    PhD Candidate: Zimo Li

    Title: Human Appearance and Performance Synthesis Using Deep Learning

    Committee:
    Stefanos Nikolaidis
    Aiichiro Nakano
    Andrew Nealen
    Laurent Itti
    Mike Zyda

    Abstract:

    Synthesis of human performances is a highly sought-after technology in several industries. In this presentation, we will go over several new deep learning solutions that tackle the problems of human facial and body performance synthesis.

    Facial performance synthesis is a complex multi-step graphics problem. First, the "target" performance to be modified must be tracked and captured accurately. Then, based on the desired modification (whether to change the identity, facial expressions, or both), a modified "source" performance must be synthesized and/or captured from a different actor. Finally, the original facial performance must be removed and replaced with the synthesized one. This multi-step process poses many unique challenges. Conventional CG tracking and retargeting of expressions from source to target using a 3D mesh and static texture gives an undesired "rubbery skin" effect. Furthermore, inaccuracies in the expression tracking of the source performance using a blendshape model result in the "uncanny valley" effect in the output performance. It is often necessary to use costly capture methods, such as a Light Stage, to obtain highly accurate 3D captures and dynamic textures of a source performance in order to avoid these pitfalls. Even then, final modified performances are often uncanny.

    When dealing with human body-motion synthesis, creating new motions often requires manual artist animation, tracking new motions on an actor, or stitching together subsequences of previous animations. These methods are limited by cost or are unable to generate appreciably novel motions.

    Over the last several years, advances in AI-based generation techniques have let us address many of these issues. In this presentation, we will go over several novel techniques that reduce the cost (time, money, ease of access) and/or improve the quality of facial re-enactment and body-motion synthesis pipelines. The applications of these techniques allow us to tackle new problem settings in an efficient way, including visual dubbing (changing the lip motions of a facial performance), dynamic texture synthesis, 3D model generation, and extended human motion synthesis.




    WebCast Link: https://us05web.zoom.us/j/81890781474?pwd=cjQ3YkVDT3drMlQ2VWtlbjU2YWxyZz09

    Audiences: Everyone Is Invited

    Contact: Lizsl De Leon

  • PhD Defense - Ninareh Mehrabi

    Mon, Nov 21, 2022 @ 10:00 AM - 12:00 PM

    Thomas Lord Department of Computer Science

    University Calendar


    PhD Candidate: Ninareh Mehrabi
    Date: Monday, November 21st, 2022
    Time: 10:00 am - noon PT
    Zoom Meeting ID: 986 7933 6430
    Passcode: 813783
    Or via URL: https://usc.zoom.us/j/98679336430?pwd=akpBV05CQ3o5VVlwWnpxT2piVlB3QT09

    Title: Responsible Artificial Intelligence for a Complex World

    Abstract: With the advancement of Artificial Intelligence (AI) and its omnipresent role in different applications, it is crucial to ensure that AI systems comply with responsible practices. Moreover, the environments in which AI systems learn and interact contain various external factors that might adversely affect their behavior. Thus, those systems should be able to mitigate the potentially negative impacts of such factors. This dissertation explores several important dimensions that are essential for designing responsible AI systems. First, we focus on fairness as a central concept for responsible AI systems and analyze existing biases in various data sources and models. Moreover, we describe a framework based on interpretability for generating fair and equitable outcomes. Second, we discuss robustness to external perturbations as another important property for such systems. Next, we discuss human-centered AI systems that take natural language prompts as input, demonstrate possible issues due to ambiguous interpretation of those prompts, and describe a framework for resolving such ambiguities and generating outcomes faithful to human intention. Finally, we discuss ideas for designing AI systems that can internalize ethics and form a realization of the consequences of the tasks and design choices associated with them. We hope that the contributions presented in this dissertation will move us closer to having more responsible AI systems.

    WebCast Link: https://usc.zoom.us/j/98679336430?pwd=akpBV05CQ3o5VVlwWnpxT2piVlB3QT09

    Audiences: Everyone Is Invited

    Contact: Lizsl De Leon

  • PhD Defense - Tu Do

    Mon, Nov 21, 2022 @ 01:00 PM - 03:00 PM

    Thomas Lord Department of Computer Science

    University Calendar


    PhD Candidate: Tu Do

    Title: Optimizing Execution of In situ Workflows

    Committee: Ewa Deelman (Chair), Aiichiro Nakano, Viktor Prasanna, Michela Taufer

    Abstract:
    Advances in high-performance computing (HPC) allow scientific simulations to run at an ever-increasing scale, generating a large amount of data that needs to be analyzed over time. Conventionally, the simulation outputs the entire simulated data set to the file system for later post-processing. Unfortunately, the slow growth of I/O technologies compared to the computing capability of present-day processors causes an I/O bottleneck in post-processing, as saving data to storage is not as fast as data is generated. Following data-centric models, a new processing paradigm called in situ has recently emerged, in which simulation data is analyzed on the fly to reduce the expensive I/O cost of saving massive data for post-processing. Since an in situ workflow usually consists of co-located tasks running concurrently on the same resources in an iterative manner, the execution yields complicated behaviors that create challenges in evaluating the efficiency of an in situ run. To enable efficient execution of in situ workflows, this dissertation proposes a framework to enable in situ execution between simulations and analyses and introduces a computational efficiency model to characterize the efficiency of an in situ execution. By extending the proposed performance model to resource-aware performance indicators, we introduce a method to assess resource usage, resource allocation, and resource provisioning for in situ workflow ensembles. Finally, we discuss ideas for designing effective scheduling of a workflow ensemble by determining appropriate co-scheduling strategies and resource assignments for each simulation and analysis in the ensemble.
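
    As a toy illustration of why co-located iterative execution needs an efficiency model (the metric below is one plausible choice, not necessarily the dissertation's formulation), consider synchronous coupling in which each iteration takes as long as the slower of simulation and analysis, leaving the other component idle:

        # Toy efficiency metric for a synchronously coupled in situ run:
        # useful core-seconds divided by allocated core-seconds.
        sim_time = [10.0, 11.0, 9.5, 10.5]  # seconds per simulation iteration
        ana_time = [6.0, 12.0, 7.0, 6.5]    # analysis of the same iteration
        sim_cores, ana_cores = 96, 32

        # Each iteration lasts as long as its slower component.
        makespan = sum(max(s, a) for s, a in zip(sim_time, ana_time))
        useful = sim_cores * sum(sim_time) + ana_cores * sum(ana_time)
        allocated = (sim_cores + ana_cores) * makespan

        print(f"makespan: {makespan:.1f}s, efficiency: {useful / allocated:.2%}")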

    WebCast Link: https://usc.zoom.us/j/94496448526

    Audiences: Everyone Is Invited

    Contact: Lizsl De Leon

  • PhD Thesis Proposal - Yunhao (Andy) Ge

    Wed, Nov 23, 2022 @ 10:00 AM - 12:00 PM

    Thomas Lord Department of Computer Science

    University Calendar


    PhD Candidate: Yunhao (Andy) Ge

    Title: Towards trustworthy, effortless, and human-like AI in computer vision

    Time: Nov. 23, Wednesday, 10:00 AM-12:00 PM (PST)

    Location: Room B15 (basement), Hedco Neurosciences Building, 3641 Watt Way, Los Angeles, CA 90089, USA.

    Abstract:
    Artificial Intelligence (AI) has achieved great success in various domains, such as self-driving, medical diagnosis, and mobile robotics. Model and data, the two foundations of current AI systems, play significant roles in ensuring the success of AI. However, challenges remain to be addressed:
    On the model side: how do we make AI models trustworthy and reliable? How do we empower AI models with the learning and reasoning abilities of the human brain? (1) Lack of trustworthiness is a big challenge: the poor transparency of model decisions hinders the understanding of errors and prevents such systems from saving more lives.
    (2) Filling the gap between the human brain and AI models is challenging. "How do we humans get so much (ability) from so little (supervision)?" How can we build more powerful learning machines based on the same principles as the human brain?
    On the data side: how can we minimize the human effort in labeling data and learn from increasingly weak forms of supervision? How can we use synthetic data in place of real-world data to avoid privacy and scalability issues?
    To address the above-mentioned challenges, my research focuses on three different but highly connected and mutually supporting dimensions: 1) Human-centric and trustworthy AI: understanding the human-centric properties of AI models, such as causal explainability, robustness, steerability, and domain adaptation. 2) Humanoid AI: simulating human cognitive learning abilities, such as imagination, visual reasoning, and multi-modal learning (CLIP). 3) Data-centric (human-effortless) AI: using synthetic data and neural renderers (NeRF, DALL-E, GAN, VAE) to solve real-world computer vision problems (classification, detection, segmentation) with minimal supervision.

    Committee members: Laurent Itti (Chair), Ram Nevatia, Greg Ver Steeg, Yan Liu, Nicolas Schweighofer.

    Zoom link (hybrid): https://urldefense.com/v3/__https://usc.zoom.us/j/2226620525__;!!LIr3w8kk_Xxm!7LMAWz4bNVcqh3rTNdNUzTTvIPvcuauvaTgibRKRuQQ3EFj0WhFfn6m-Ovz35rpK$
    Meeting ID: 222 662 0525

    Location: Hedco Petroleum and Chemical Engineering Building (HED) - B15

    WebCast Link: https://urldefense.com/v3/__https://usc.zoom.us/j/2226620525__;!!LIr3w8kk_Xxm!7LMAWz4bNVcqh3rTNdNUzTTvIPvcuauvaTgibRKRuQQ3EFj0WhFfn6m-Ovz35rpK$

    Audiences: Everyone Is Invited

    Contact: Lizsl De Leon

  • PhD Thesis Proposal - Zhaoheng Zheng

    Wed, Nov 30, 2022 @ 08:30 AM - 10:00 AM

    Thomas Lord Department of Computer Science

    University Calendar


    Ph.D. Candidate: Zhaoheng Zheng

    Topic: Incorporating Large-Scale Vision-Language Corpora in Visual Understanding


    Committee Chair: Prof. Ram Nevatia
    Committee Member: Prof. Keith Jenkins
    Committee Member: Prof. Jesse Thomason
    Committee Member: Prof. Greg Ver Steeg
    Committee Member: Prof. Mohammad Soleymani

    Abstract: Vision and language are key mediators through which humans interact with the external world or other members of society. One goal of artificial intelligence (AI) research is to create machines that can perceive the real world through multiple modalities. Previous research has shown remarkable progress in creating functional visual or linguistic perception systems with the help of deep neural networks. Recently, thanks to advances in the Internet and social media, large-scale vision-language corpora can be easily accessed, motivating research that aims at creating large-scale Vision-Language Pre-training (VLP) models. Compared with previous methods, VLP models are stronger and more generalizable thanks to their data scale. In this thesis, we investigate how to leverage such data to boost existing visual understanding tasks. In particular, in FashionVLP we propose to fine-tune a pre-trained VLP model for fashion image retrieval. More specifically, we fine-tune the model with customized input sequences containing various vision-language features, achieving significant improvements on multiple benchmarks. Moreover, we take a step further and explore better designs for VLP models to learn from large-scale corpora, resulting in our recent work, Fractional Intermediate Tower (FIT). FIT enhances the vision-language fusion process inside VLP models by encoding vision features from multiple vision layers before they are consumed by the fusion encoder.
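
    As a toy illustration of tapping multiple vision layers before fusion (shapes and the averaging scheme are ours, not FIT's actual architecture), the sketch below projects features from several intermediate layers and merges them into one token sequence for the fusion encoder.

        # Toy "intermediate tower": project several intermediate vision-layer
        # outputs and merge them into one input for a cross-modal fusion encoder.
        import numpy as np

        rng = np.random.default_rng(0)

        d_vision, d_fusion, n_tokens = 768, 512, 50

        # Pretend outputs of three intermediate vision-transformer layers.
        layer_feats = [rng.normal(size=(n_tokens, d_vision)) for _ in range(3)]

        # One small projection per tapped layer.
        projections = [rng.normal(size=(d_vision, d_fusion)) * 0.02
                       for _ in layer_feats]

        def intermediate_tower(feats, projs):
            """Project each tapped layer and average, yielding one sequence
            of vision tokens for the fusion encoder."""
            projected = [f @ p for f, p in zip(feats, projs)]
            return np.mean(projected, axis=0)

        fusion_input = intermediate_tower(layer_feats, projections)
        print(fusion_input.shape)  # (50, 512): ready for the fusion encoder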

    WebCast Link: https://usc.zoom.us/j/95655803815?pwd=d3RrOXNrU2dVVE1sTkZpYXU3NWxEUT09

    Audiences: Everyone Is Invited

    Contact: Lizsl De Leon
