
Big Data for 3-D Hair Scanning

Hao Li uses machine learning to build 3-D models of hair
by: Katie McKissick
March 26, 2015
 Image courtesy of Hao Li

Creating digital human beings is right now the stuff of big, blockbuster movies, but soon the power to create your own highly realistic digital double will be in your hands, in part because of researchers like Hao Li, assistant professor in the Department of Computer Science. His interest in the future of computer graphics, virtual reality and augmented reality has had him digitizing people in real time, and right now, he's working on hair.

“Hair is one of the hardest things to model in computer graphics,” Li said. In movies with complex hair like “Brave” or “Frozen,” artists have to define every strand and each curl, and that covers only the shape of the hair, not the animation of how it moves. But hair is one of the basic features that makes us look like who we are, so it’s important to get it right.

Li’s lab first worked on algorithms that could render a 3-D model of loose, unconstrained hair, and it has now moved on to a more complex hair-capture problem: braids.

“I had to learn about all the braids,” said Li. “We were all struggling with how to do them.” Li and his team watched YouTube video tutorials to learn how to do all the different styles of braids—simple, French, Dutch, fishtail, waterfall, the list goes on.

The variation in braid styles is just part of the difficulty in modeling them. They can be all different thicknesses and levels of neatness. For the computer to automatically render a digital braid based on input photos, it needs to extract a few properties and then compare them to a database of braid possibilities to find the closest match.
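The extract-then-match step described above can be sketched as a nearest-neighbor lookup. The descriptor fields and example values below are purely illustrative assumptions, not the actual features Li's system extracts:

```python
import numpy as np

# Hypothetical braid descriptors: (strand count, strand width, waviness).
# The names and numbers are illustrative stand-ins, not data from Li's lab.
BRAID_DB = {
    "simple":   np.array([3.0, 1.0, 0.30]),
    "fishtail": np.array([2.0, 0.4, 0.55]),
    "french":   np.array([3.0, 1.2, 0.25]),
    "dutch":    np.array([3.0, 1.2, 0.40]),
}

def closest_braid(query, db=BRAID_DB):
    """Return the database braid style whose descriptor vector is
    closest to the query, using plain Euclidean distance."""
    return min(db, key=lambda name: np.linalg.norm(db[name] - query))
```

A query descriptor extracted from input photos would then pick out the best-matching style, e.g. `closest_braid(np.array([2.0, 0.5, 0.5]))` returns `"fishtail"` for this toy database.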

“We have a mathematical function that actually describes all combinatorial possibilities of how braids could be generated,” said Li.
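One way to picture such a generative function: a braid's strands can be approximated as phase-shifted sinusoidal guide curves that weave over and under one another. The parameterization below is a rough illustrative sketch, not the actual function from Li's work:

```python
import numpy as np

def braid_strands(n_strands=3, turns=4, samples=200, width=1.0, depth=0.4):
    """Generate illustrative 3-D centerlines for a braid's strands.

    Each strand is the same sinusoidal guide curve shifted in phase:
    x oscillates side to side, z oscillates at double frequency to
    produce the over/under weave, and y advances down the braid.
    This parameterization is an assumption for illustration only.
    """
    t = np.linspace(0.0, 2.0 * np.pi * turns, samples)
    strands = []
    for i in range(n_strands):
        phase = 2.0 * np.pi * i / n_strands
        x = width * np.sin(t + phase)          # lateral sway
        z = depth * np.sin(2.0 * (t + phase))  # over/under crossing
        y = t / (2.0 * np.pi)                  # progress along the braid
        strands.append(np.stack([x, y, z], axis=1))
    return strands
```

Varying the strand count, frequency, and amplitudes sweeps out a family of braid shapes, which conveys the idea of a single function covering many combinatorial possibilities.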

Image courtesy of Hao Li

At first, Li and his team captured images of these braids in a setting with 50 cameras suspended from scaffolding surrounding the subject, but Li streamlined the process so it could be done with simpler hardware: a single Xbox Kinect. The system uses 2-D orientation field analysis, examining the hair to determine how it flows. Li’s algorithms also work independently of how the captured images are lit, since lighting can vary so much.
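A minimal sketch of what a 2-D orientation field computes: at each pixel, the dominant local direction derived from image gradients. This bare-bones version omits the tensor smoothing a real hair-capture pipeline would use, and is an assumption about the general technique, not Li's implementation:

```python
import numpy as np

def orientation_field(img):
    """Per-pixel orientation from image gradients.

    Uses the double-angle form, which is invariant to the 180-degree
    ambiguity of a strand. The angle returned points along the gradient;
    the strand itself runs perpendicular to it. Real pipelines smooth
    the structure-tensor entries over a neighborhood before taking the
    angle; this sketch works pointwise.
    """
    Iy, Ix = np.gradient(img.astype(float))  # finite-difference gradients
    theta = 0.5 * np.arctan2(2.0 * Ix * Iy, Ix**2 - Iy**2)
    strength = np.hypot(Ix, Iy)              # gradient magnitude
    return theta, strength
```

On a synthetic image of diagonal stripes, the field recovers the stripe direction wherever the gradient is strong; low-magnitude pixels carry no reliable orientation and would be masked out.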

Next, Li wants to generate 3-D models of hair from smartphone selfies—even ones where part of the person’s hair is obstructed or out of frame. Soon, with just a smartphone, everyone will be able to construct personal 3-D digital doubles.

“The big impact would be for an ordinary person to have access to these high quality 3-D models to create an avatar of themselves they could play a game with,” said Li.