University of Southern California

Events Calendar




Events for December 09, 2024

  • PhD Dissertation Defense - Haiwei Chen

    Mon, Dec 09, 2024 @ 02:00 PM - 04:00 PM

    Thomas Lord Department of Computer Science

    University Calendar


    Title: Designing Neural Networks from the Perspective of Spatial Reasoning
     
    Date: December 9, 2024
     
    Time: 2:00 pm - 4:00 pm 
     
    Location: GFS 104
     
    Committee: Yajie Zhao (Chair), Ramakant Nevatia, and Andrew Nealen
     
    Abstract: All visual data, from images to CAD models, live in a 2D or 3D spatial domain. To understand and model visual data, spatial reasoning has always been fundamental to computer vision algorithms, and the practice has been widely extended to artificial neural networks built for visual analysis. The basic building blocks of a neural network - operators and representations - are means to learn spatial relationships and are therefore built with spatial properties. In this thesis, we present designs of "spatial-aware" neural operators and representations in different application contexts, with a unique focus on how these design choices affect the spatial properties of the neural networks in ways that benefit the tasks at hand. The first topic explored is the equivariance property, where an SE(3) equivariant convolutional network is designed for 3D pose estimation and scene registration. In this chapter, we show that the equivariance property of a convolutional neural network can be practically extended to higher-dimensional space and proves highly effective for applications that are sensitive not only to translations but also to 3D rotations. The second topic explored is learning neural operators that approximate spatially continuous functions in a pattern synthesis context. In this chapter, we explore novel representations of periodic encoding and a continuous latent space for a generative network that synthesizes diverse, high-quality, and continuous 2D and 3D patterns. This unique formulation makes the generative model at least 10 times faster and more memory efficient than previous efforts, and marks one of the earliest attempts to adapt implicit networks to the generative setting. The third topic explored is spatial awareness with regard to incomplete images, where a generative network model for image inpainting is designed by restricting its receptive field. Combined with a generative transformer and discrete latent codes, this novel paradigm demonstrates the effectiveness of separating analysis and synthesis in challenging image inpainting scenarios: the resulting network model achieves state-of-the-art performance in both diversity and quality when completing partial images with free-form holes occupying as much as 70% of the image. I believe that the topics covered have contributed to a better understanding of neural operator and representation designs for both discriminative and generative learning in computer vision, from the perspective of identifying effective ways of spatial reasoning for the targeted visual applications.

    Location: Grace Ford Salvatori Hall Of Letters, Arts & Sciences (GFS) - 104

    Audiences: Everyone Is Invited

    Contact: Ellecia Williams

  • World models beyond autoregressive next state prediction

    Mon, Dec 09, 2024 @ 03:00 PM - 04:00 PM

    Ming Hsieh Department of Electrical and Computer Engineering, Thomas Lord Department of Computer Science, USC School of Advanced Computing

    Conferences, Lectures, & Seminars


    Speaker: Abhishek Gupta, Ph.D., Assistant Professor of Computer Science and Engineering, Paul G. Allen School at the University of Washington

    Talk Title: World models beyond autoregressive next state prediction

    Series: CSC@USC/CommNetS-MHI Seminar Series

    Abstract: Learned models of system dynamics provide an appealing way of predicting future outcomes in a system, enabling downstream use for planning or off-policy evaluation in applications such as robotics. However, the prevalent paradigm of autoregressive, next-state prediction in learning dynamics models is challenging to scale to environments with high-dimensional observations and long horizons. In this talk, I will present alternative techniques for model learning that go beyond directly predicting next states. First, we will discuss a reconstruction-free class of models that go beyond next-observation prediction by learning the evolution of task-directed latent representations for high-dimensional observation spaces. We will then show how this can be generalized to learning a new class of models that avoid autoregressive prediction altogether by directly modeling long-term cumulative outcomes, while remaining task agnostic. In doing so, this talk will propose alternative ways of thinking about model learning that retain the benefits of transferability and efficiency from model-based RL while going beyond next-state prediction.

    Biography: Abhishek Gupta is an assistant professor of computer science and engineering at the Paul G. Allen School at the University of Washington. Prior to joining the University of Washington, he was a post-doctoral scholar at MIT, collaborating with Russ Tedrake and Pulkit Agrawal. He completed his Ph.D. at UC Berkeley working with Pieter Abbeel and Sergey Levine, building systems that leverage reinforcement learning algorithms to solve robotics problems. He is interested in research directions that enable performing reinforcement learning directly in the real world: reward supervision in reinforcement learning, large-scale real-world data collection, learning from demonstrations, and multi-task reinforcement learning. He has also spent time at Google Brain. He is a recipient of the NDSEG and NSF graduate research fellowships, and several of his works have been presented as spotlight presentations at top-tier machine learning and robotics conferences.

    Host: Erdem Biyik

    Location: Hughes Aircraft Electrical Engineering Center (EEB) - 248

    Audiences: Everyone Is Invited

    Contact: Erdem Biyik
