
Creating a Virtual World

Ulrich Neumann, Suya You and team work to turn 3-D point clouds into interactive models
By: Katie McKissick
August 19, 2013

Imagine a future where you scan a room with your cell phone’s camera and software turns the photo into a complete 3-D, interactive model. All the separate features in the space (the furniture, the objects and the people in it) are turned into 3-D models as well. That means you could select a feature such as a table and manipulate it: move it, rotate it or remove it from the simulated space.

Now imagine creating 3-D models of complex machinery, whole buildings and even entire cities.

Research in this area of 3-D computer modeling combines scanning technology, object recognition and computer-aided design.

Student works with a 3-D model of industrial equipment. Photo by Nikki David

Professors Ulrich Neumann and Suya You in the USC Viterbi School of Engineering Computer Science Department research 3-D model generation and application in the Computer Graphics and Immersive Technologies Laboratory.

How does it work?

Generating an interactive model begins with a lidar camera, a name coined by combining the words light and radar. Lidar cameras use lasers to measure the distance to each point in a given space, generating a 3-D point cloud.
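Each lidar return is just a distance plus the direction the laser was pointing; converting that into a 3-D point is a spherical-to-Cartesian calculation. This is a minimal sketch of that idea (the ranges and angles are made-up sample values, not real sensor data):

```python
import math

def lidar_to_point(range_m, azimuth_rad, elevation_rad):
    """Convert one lidar return (distance plus beam angles) to an (x, y, z) point."""
    x = range_m * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = range_m * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = range_m * math.sin(elevation_rad)
    return (x, y, z)

# Sweeping the laser across many angles yields a cloud of such points.
cloud = [lidar_to_point(5.0, math.radians(a), 0.0) for a in range(0, 360, 10)]
```

A real scanner fires millions of such pulses per scan, producing the dense point clouds the researchers work with.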

The point cloud itself shows every facet of a space or object, but turning those myriad points into recognizable and interactive objects is a challenge. A human being can look at a space and immediately know which points correspond to a table, which points make up the wall and floor, and what surfaces are attached to chairs. But a computer doesn’t have that same familiarity.

Computers struggle to make sense of a scene most people take for granted.

Neumann and You, along with their graduate students, must write algorithms that allow a computer to automatically deconstruct a point cloud into its distinct elements.

3-D point cloud matching points between different poses of a cat. Image courtesy of Jing Huang

“At some basic level,” said Neumann, “we’re trying to label things and carve them out of the point cloud and say, ‘This set of points makes a door. This set of points makes a table. This set of points is a person or an object.’”
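The labeling Neumann describes, carving a cloud into sets of points that belong to distinct objects, is the hard research problem. As a toy illustration only (not the lab's actual algorithm), even a crude rule like a height threshold can separate floor points from everything else in a synthetic cloud:

```python
def label_points(points, floor_z=0.05):
    """Toy segmentation: tag each (x, y, z) point as 'floor' or 'object'
    based on its height. Real systems fit planes, pipes and other shapes
    to the points instead of using a fixed threshold."""
    return ["floor" if z <= floor_z else "object" for (x, y, z) in points]

# Synthetic cloud: four points on the floor and two on a table top.
cloud = [(0, 0, 0.0), (1, 0, 0.01), (0, 1, 0.02), (1, 1, 0.0),
         (0.5, 0.5, 0.75), (0.6, 0.5, 0.75)]
labels = label_points(cloud)  # ['floor', 'floor', 'floor', 'floor', 'object', 'object']
```

Real scenes have sloped floors, clutter and noise, which is why the group's algorithms must fit geometric models to the points rather than rely on a single threshold.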

A system automatically extracts and reconstructs pipe-runs from industrial site scans. The picture shows models that are color coded by type of pipe. Image courtesy of Rongqi Qiu.

As this technology advances, we near a future where we can model our surroundings and ourselves, creating a virtual reality where we can see things from new angles and test various possibilities.

The applications of 3-D interactive computer graphics span many areas: facial recognition, infrastructure modeling, city planning and untold future commercial uses.

Early versions of this technology are already in use. For example, glasses.com’s iPad app lets users create a 3-D model of their face so they can virtually try on the full inventory of available glasses frames.

An iPad app allows users to create 3-D models of their head in order to virtually try on frames.

But 3-D models will provide much more than a virtual shopping experience. They will give us new insight into our world and allow us to see things from a totally different vantage point. 


Virtual USC from USC Viterbi on Vimeo.

A 3-D model of the USC Campus and surrounding area created by Professor Ulrich Neumann and graduate student Zhenzhen Gao at the Viterbi School of Engineering Computer Graphics and Immersive Technology lab.