Modeling Human Motion Behaviors and 3D Environment from Real-World Capture
Tue, Oct 15, 2024 @ 04:00 PM - 05:00 PM
Thomas Lord Department of Computer Science
Conferences, Lectures, & Seminars
Speaker: Andrew Feng, Associate Director - Geospatial Research, USC-ICT
Talk Title: Modeling Human Motion Behaviors and 3D Environment from Real-World Capture
Abstract: Synthesizing believable human motion from input conditions is an essential task with many applications in gaming, simulation, and virtual reality. A variety of conditional inputs can drive the motion synthesis process, such as speech, music, action categories, and natural language text descriptions. Generating motion from text prompts or speech audio requires modeling both language and motion, which is especially challenging because the model must learn a cross-modal mapping to produce motion sequences. A further challenge in learning a motion synthesis model is that the cross-modal mapping may not be deterministic; for instance, multiple gesture motions may all be plausible for the same speech utterance. The first part of this talk will cover our research on leveraging discrete latent space learning and recent generative modeling methods to address these challenges. Our proposed method models motion segments as discrete codes and learns the underlying data distributions for these motion units. As a result, it does not suffer from the over-smoothed or damped animations caused by the deterministic regression mappings of previous methods.

Modeling real-world environments from multi-view images remains a significant challenge in computer vision and graphics. The resulting models need to retain both accurate visual appearance and geometry to be useful for digital twins, simulation, or scan-to-BIM applications. 3D Gaussian Splatting (3DGS) has recently emerged as a viable method for novel view synthesis and real-time rendering. The second part of the talk will cover our recent work on revising the 3DGS training and densification strategies to improve radiance field and geometry reconstruction.
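To illustrate the discrete latent space idea mentioned in the abstract, below is a minimal sketch of a VQ-VAE-style codebook that quantizes encoded motion segments into discrete codes. It shows the general technique only, not the speaker's actual method; the class name MotionCodebook, the tensor dimensions, and the loss weights are all assumptions for illustration.

```python
# Minimal, hypothetical sketch of quantizing motion segments into discrete codes
# (a VQ-VAE-style codebook). This is NOT the speaker's implementation; all names,
# dimensions, and hyperparameters below are assumptions.
import torch
import torch.nn as nn


class MotionCodebook(nn.Module):
    def __init__(self, num_codes: int = 512, code_dim: int = 128):
        super().__init__()
        # Learnable codebook: each row is one discrete "motion unit" embedding.
        self.codebook = nn.Embedding(num_codes, code_dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)

    def forward(self, z_e: torch.Tensor):
        # z_e: continuous encoder output for motion segments, shape (batch, time, code_dim).
        b, t, d = z_e.shape
        flat = z_e.reshape(-1, d)                              # (b*t, d)
        # Squared distance from each latent to every codebook entry.
        dists = (flat.pow(2).sum(1, keepdim=True)
                 - 2 * flat @ self.codebook.weight.t()
                 + self.codebook.weight.pow(2).sum(1))         # (b*t, num_codes)
        codes = dists.argmin(dim=1)                            # discrete code indices
        z_q = self.codebook(codes).reshape(b, t, d)            # quantized latents
        # Straight-through estimator so gradients flow back to the encoder.
        z_q_st = z_e + (z_q - z_e).detach()
        # Standard VQ-VAE codebook and commitment losses.
        vq_loss = ((z_q - z_e.detach()).pow(2).mean()
                   + 0.25 * (z_e - z_q.detach()).pow(2).mean())
        return z_q_st, codes.reshape(b, t), vq_loss


# Usage sketch: quantize a batch of encoded motion segments.
quantizer = MotionCodebook()
z_e = torch.randn(4, 32, 128)          # e.g., 4 sequences of 32 motion segments
z_q, codes, vq_loss = quantizer(z_e)
```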
This lecture satisfies requirements for CSCI 591: Research Colloquium.
Biography: Andrew Feng is currently the Associate Director of Geospatial Research at USC-ICT. He leads the Terrain Research group at ICT, focusing on geospatial R&D initiatives in support of the Army’s One World Terrain project. Previously, he was a research scientist working on gesture synthesis, character animation, and automatic 3D avatar generation. His research applies machine learning techniques to computer graphics problems such as 3D model reconstruction, semantic segmentation, and animation synthesis. He received his Ph.D. and M.S. degrees in computer science from the University of Illinois at Urbana-Champaign.
Host: Jonathan Gratch, Research Professor
Location: Olin Hall of Engineering (OHE) - 100c
Audiences: Everyone Is Invited