
Fast Capture Makes You The Avatar

With a $100 device, a USC Viterbi team can put you in a 3D video game
By: Rosalie Murphy
April 10, 2014 —


Our avatars are everywhere. In video games, we may rock a mohawk or carry a machine gun; on social media, we probably prefer semi-professional photos. They may or may not be accurate, but we rely on these little cartoon faces, characters and icons to represent us.

Soon we may no longer need to create those images. With Fast Avatar Capture technology, developed this year by a USC Viterbi team, avatars won’t just be videos or animations with hair and skin that look like ours. They’ll actually be us – or at least 3D models that walk and talk just like us.
 

Video: “Get into the Game: Scan Your Own Avatar...in Minutes,” from USC Viterbi on Vimeo.

“The idea is to capture more and more of a person and what makes them unique,” said Ari Shapiro, the Fast Avatar Capture project leader and a researcher at the USC Institute for Creative Technologies (ICT). “Are they fearful, confident, twitchy? As we build better models of their behavior, we can bring more of that into this virtual character. It might even socialize like them.”
 
The software, developed by Shapiro with Evan Suma and Gerard Medioni of the Department of Computer Science, uses a Microsoft Kinect to scan a player’s body from four angles, then assembles the scans into a 3D model of the player inside the video game.
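The team’s code isn’t published in this article, but the basic geometry is easy to sketch. The short Python example below, which uses made-up point-cloud data, shows the naive version of the idea: if the player turns a quarter circle between each of the four poses, rotating each scan back by a multiple of 90 degrees puts all of them in a shared frame before they are merged.

```python
import numpy as np

def rotation_about_y(angle_rad):
    """Rotation matrix about the vertical (y) axis."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[  c, 0.0,   s],
                     [0.0, 1.0, 0.0],
                     [ -s, 0.0,   c]])

def merge_scans(scans):
    """Merge four point clouds captured as the player turns roughly 90 degrees
    between poses. Each scan is an (N, 3) array of points in the sensor's
    coordinate frame; rotating scan i by i * 90 degrees brings every scan
    into a single shared frame before stacking them together."""
    merged = []
    for i, cloud in enumerate(scans):
        R = rotation_about_y(np.deg2rad(90 * i))
        merged.append(cloud @ R.T)
    return np.vstack(merged)

# Hypothetical usage: four random stand-ins for depth scans of the same body
scans = [np.random.rand(10_000, 3) for _ in range(4)]
avatar_cloud = merge_scans(scans)
print(avatar_cloud.shape)   # (40000, 3)
```

In reality the player’s posture shifts slightly between turns, so a naive merge like this smears the details. Aligning the scans despite those shifts is the hard part, which is where Medioni’s work, described below, comes in.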
 
“I saw a demo at CES [the Consumer Electronics Show] this year, reconstructing an avatar using over 40 high-res cameras. It took a couple hours to generate. It made the news because the idea is to put you, as you appear, into a simulation,” Suma said. “We’re doing it with a single Kinect sensor that costs $100 and is already widely available in millions of people’s living rooms.”
 
Fast Avatar Capture started when Suma created a 3D image of another researcher simply sitting in a chair. Shapiro asked if he could do the same for someone standing up, without blur. Then Medioni joined the project: he had already developed 3D capture technology that relied on multiple cameras, but a single Kinect, which can tilt up and down, can capture a whole person standing close to the camera.
 
“As you turn, there’s no way to have exactly the same posture, so people said there’s no way to get an accurate reconstruction,” Medioni said. He proved those people wrong: the camera captures the player from four angles, and his software seamlessly integrates the scans into a single 3D model.
 
Next, Suma developed the linking algorithms that arrange Medioni’s scans into 3D figures. To be animated, the figures had to meet constraints on size, shape and quality; to be reconstructed in 3D, the scans had to be clear, high-quality images. His work created the end-to-end pipeline that lets Fast Avatar Capture work so quickly.
 
“If you’re a user in your living room, you want to be able to walk up to the camera, scan in, and have it bring you into the virtual world as you appear. You want this to happen seamlessly and fast,” Suma said. “Building that automatic pipeline was a non-trivial challenge.”
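The article doesn’t detail the pipeline’s internals, but its shape (capture, validate, reconstruct, rig, animate, with no manual steps in between) can be sketched roughly as follows. The quality thresholds and function names here are hypothetical, not the team’s actual values.

```python
import numpy as np

# Hypothetical thresholds: the team only says scans must be clear and
# high-quality, so these exact numbers are purely illustrative.
MIN_POINTS = 5_000    # very sparse scans can't be reconstructed cleanly
MAX_RANGE_M = 2.5     # a player standing too far from the sensor loses detail

def scan_is_usable(cloud):
    """cloud: (N, 3) array of points, with z the distance from the sensor in metres."""
    return len(cloud) >= MIN_POINTS and float(np.max(cloud[:, 2])) <= MAX_RANGE_M

def scan_in(capture_pose, reconstruct, rig, animate):
    """End-to-end sketch of an automatic pipeline: capture four poses, validate
    each one, then hand the data straight through reconstruction, rigging and
    animation with no manual clean-up anywhere along the way."""
    scans = [capture_pose(turn) for turn in range(4)]
    if not all(scan_is_usable(s) for s in scans):
        raise ValueError("Rescan needed: a pose was too sparse or too far away.")
    mesh = reconstruct(scans)   # Medioni's stage: fuse the scans into one model
    figure = rig(mesh)          # Suma's stage: make the model animatable
    return animate(figure)      # Shapiro's stage: bring the figure to life
```

Passing the stages in as functions is just a way to keep the sketch self-contained; the point is that everything downstream of the scan runs without a human in the loop.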
 
Finally, Shapiro animated the figures using ICT’s SmartBody technology, which can observe a user’s unique physical behaviors and transfer them into a character. These avatars actually imitate the way people move – their strides, their resting poses and, someday, even the way their faces emote when they speak.
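SmartBody’s internals aren’t described here, so as a rough illustration of what “observing a user’s unique physical behaviors” might involve, the sketch below estimates two simple traits, stride length and resting pose, from hypothetical tracking data. Real behavior models are far richer than this.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class BehaviorProfile:
    """Per-user motion traits, a simplified stand-in for a full behavior model."""
    stride_length_m: float   # average distance between successive footfalls
    rest_pose: np.ndarray    # (J, 3) mean joint positions of the user's idle stance

def observe_behavior(footfalls, idle_frames):
    """footfalls: (S, 3) foot-plant positions tracked while the user walks.
    idle_frames: (F, J, 3) joint positions recorded while the user stands still."""
    step_lengths = np.linalg.norm(np.diff(footfalls, axis=0), axis=1)
    return BehaviorProfile(stride_length_m=float(np.mean(step_lengths)),
                           rest_pose=idle_frames.mean(axis=0))
```

Parameters like these could then drive the avatar’s walk cycle and idle animation so that it strides and stands the way its owner does, which is the kind of transfer Shapiro describes.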
 
“Ideally we want them to be suitable for face-to-face interaction,” Shapiro said. “Characters right now are suitable for distance viewing, but there’s not a lot of detail in their faces.”
 
The Kinect 2 is on the market now, and the team thinks improved hardware will allow even more accurate models, produced even faster.
 
Already, though, their collaboration has produced technology that could change the way we play games and interact with each other online. Medioni, for example, developed his first 3D capture technology to let people try on clothes at home. Beyond video games, Fast Avatar Capture could insert soldiers and their teams into training simulations, or allow executives oceans apart to hold more personal video conferences.
 
But the most useful applications, they believe, will come from users themselves.
 
“I believe we’re the first people to put together an end-to-end solution. We’ve had all this infrastructure for bringing a character to life, but we’ve combined all these technologies and put them together to really reduce the cost,” Shapiro said. “Commodity technology can have real implications. When everybody can use it, people are going to come up with really creative ways to use it.”