Digital Hair Manipulation Gets Dynamic

Posted by Rob Knies

A few strokes on an image put the hair-manipulation process in place

Had your hair cut lately? Most of us probably can answer that one affirmatively. Use a brush or comb? Well, yeah, of course. Does your hair blow in the wind? Only when it’s windy.

Such simplistic questions might have you scratching your head. In real life, the appearance of human hair is edited regularly, either by the elements or by ourselves. It’s something so natural, so normal, that we don’t even think about it.

Lvdi Wang does, though. That’s because Wang, an associate researcher in the Internet Graphics Group at Microsoft Research Asia, has been working for the past year and a half on improving the appearance of hair in digital images, an enormously challenging task in technical terms.

This week, during SIGGRAPH 2013, the 40th International Conference and Exhibition on Computer Graphics and Interactive Techniques, being held July 21-25 in Anaheim, Calif., Wang and his colleagues will demonstrate their latest progress by delivering a paper called Dynamic Hair Manipulation in Images and Videos, which Wang co-authored with Zhejiang University's Menglei Chai, Yanlin Weng, Xiaogang Jin, and Kun Zhou.

The paper outlines a new single-view hair-modeling technique that generates visually and physically plausible 3-D hair models, visually matching the original input image, with only modest user interaction.

“We proposed a new method for creating a 3-D hair model from just one single photograph or short video,” Wang explains. “Such a model contains tens of thousands of individual hair strands and allows the user to manipulate hair in images or videos in a structure-preserving and semantically meaningful way.”
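Concretely, one can picture such a strand-based model as tens of thousands of polylines, each rooted on a scalp surface fitted to the photograph. The sketch below is a hypothetical Python rendering of that idea; the names Strand and HairModel are illustrative, not the authors' actual data structures.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a strand-based hair model: each strand is a
# polyline of 3-D vertices ordered from root to tip, with the root
# assumed to lie on a scalp proxy fitted to the photograph.

Vec3 = tuple[float, float, float]

@dataclass
class Strand:
    vertices: list[Vec3]  # ordered from root to tip

    @property
    def root(self) -> Vec3:
        return self.vertices[0]  # stays fixed on the scalp

@dataclass
class HairModel:
    scalp: object                  # 3-D head/scalp proxy fitted to the image
    strands: list[Strand] = field(default_factory=list)
```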

In the real world, even a slight alteration to a person's hair can expose new strands while hiding others from sight. A new image of the same person thus would not correspond directly, at the pixel level, to the original.

Recent research on 3-D-aware image manipulation, including a SIGGRAPH 2012 paper co-authored by Baining Guo of Microsoft Research Asia and Zhou, shows that, by fitting proper 3-D proxies to objects of interest in such photos, semantically meaningful operations become possible that would be almost impossible in the 2-D domain.

In the latest paper, the researchers apply the principle of "physical plausibility": hair roots stay fixed in the scalp of the person in the image, each strand remains smooth instead of exhibiting sharp bends, and the length and continuity of real strands of hair are preserved to the extent possible.
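To make those constraints concrete, here is a minimal sketch, assuming a strand is stored as an (n, 3) NumPy array of vertices ordered root to tip. It is a generic iterative projection in the spirit described above, not the paper's actual solver: it pins the root, pulls interior vertices toward their neighbors to remove sharp bends, and rescales each segment back to its original length.

```python
import numpy as np

def enforce_plausibility(strand: np.ndarray, iterations: int = 10) -> np.ndarray:
    """Illustrative constraint projection for one strand.

    `strand` is an (n, 3) array of vertices ordered root to tip. This is
    a generic sketch of the three constraints named above (fixed root,
    smoothness, length preservation), not the authors' actual solver.
    """
    root = strand[0].copy()
    # Original segment lengths, to be preserved after smoothing.
    lengths = np.linalg.norm(np.diff(strand, axis=0), axis=1)
    p = strand.copy()
    for _ in range(iterations):
        # Smoothness: pull each interior vertex toward its neighbors' midpoint.
        p[1:-1] = 0.5 * p[1:-1] + 0.25 * (p[:-2] + p[2:])
        # Fixed root: the strand stays grown from the scalp.
        p[0] = root
        # Length preservation: rescale each segment to its original length.
        for i in range(1, len(p)):
            d = p[i] - p[i - 1]
            norm = np.linalg.norm(d)
            if norm > 1e-9:
                p[i] = p[i - 1] + d * (lengths[i - 1] / norm)
    return p
```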

The user provides a few strokes atop the original portrait, and the technology delivers a high-quality model with both visual fidelity and physical plausibility, enabling effects such as alternative combing strategies or motion-preserving hair replacement in video. Alternatively, a couple of deft strokes on the original produce a virtual haircut.
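The "virtual haircut" can likewise be imagined as trimming each selected strand back to a stroke-specified length along its polyline. The helper below is hypothetical; the real system infers from the 2-D strokes which strands to cut and where.

```python
import numpy as np

def trim_strand(strand: np.ndarray, target_length: float) -> np.ndarray:
    """Cut a strand polyline at `target_length` measured along the strand.

    Hypothetical helper for illustration only; `strand` is an (n, 3)
    array of vertices ordered from root to tip.
    """
    seg = np.linalg.norm(np.diff(strand, axis=0), axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg)])   # arc length at each vertex
    if cum[-1] <= target_length:
        return strand.copy()                        # already short enough
    i = max(1, int(np.searchsorted(cum, target_length)))  # first vertex past the cut
    t = (target_length - cum[i - 1]) / seg[i - 1]   # fraction along the cut segment
    cut_point = strand[i - 1] + t * (strand[i] - strand[i - 1])
    return np.vstack([strand[:i], cut_point])
```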

“To get the correct hair-editing results,” Wang says, “we must make sure the 3-D hair strands are indeed grown from the scalp of a 3-D hair model, so that when the user moves the head or combs the hair, the hair roots are always fixed on the scalp.

“This is the key to making ‘dynamic’ hair manipulation—changing the shapes of individual strands—possible. It also is one of the main technical challenges we have tackled.”

Wang and colleagues also have extended their model to address simple video input and generate dynamic 3-D hair models, enabling users to manipulate hair in a video or to transfer styles from images to videos.
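As a loose, hypothetical illustration of the video case (reusing enforce_plausibility from the sketch above, and assuming a 4-by-4 head pose has already been estimated for each frame), a reconstructed hair model could be carried from frame to frame like this:

```python
import numpy as np

def propagate_hair(strands: list[np.ndarray],
                   head_poses: list[np.ndarray]) -> list[list[np.ndarray]]:
    """Apply each frame's 4x4 head transform to every strand, then
    re-project the plausibility constraints. Hypothetical pipeline
    step for illustration, not the paper's actual video method."""
    frames = []
    for pose in head_poses:
        R, t = pose[:3, :3], pose[:3, 3]
        frame = [enforce_plausibility(s @ R.T + t) for s in strands]
        frames.append(frame)
    return frames
```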

“We are excited about the potential of our techniques to directly benefit a wide range of users,” Wang concludes. “This is due to the fact that, compared with traditional multiple-image-based solutions, our method dramatically reduces the requirements for the capture device, whether it is a hardware setup in a lab or a built-in camera in a user’s smartphone.”

Soon, it seems, it might not even be necessary to visit a hairdresser when you want to polish your social-media likeness. A few swipes to the image or video, a bit of computation, and you’ll be looking better than ever!