Projecting the Future of Interaction

Posted by Rob Knies

Imagine that you carry a small device that can make any nearby surface interactive—and that those surfaces can be manipulated via multitouch gestures and can store data.

“Wouldn’t that be cool?”

The enthusiasm belongs to David Molyneaux, and he is one of several Microsoft Research Cambridge researchers striving to bring this fanciful vision to reality, using interactive, environmentally aware projector systems embedded in handheld devices.

[Image: interactive images projected onto a wall]

“In the future,” Molyneaux predicts, “we will all have devices we carry around—maybe projectors integrated into mobile phones—that enable us to augment arbitrary surfaces and objects with digital content and relevant information. We will live in a 3-D ‘information space’ where objects, surfaces, and devices around us in the home or office can generate digital information or have it attached. These mobile devices will reveal this information and enable interaction with the information directly.”

That vision is widely shared within the augmented-reality community and could prove invaluable in scenarios such as gaming, guided workflows, and collaborative, ad hoc information work.

In some respects, the Cambridge researchers’ project parallels OmniTouch, featured during the Association for Computing Machinery’s 24th Symposium on User Interface Software and Technology, being held Oct. 16-19 in Santa Barbara, Calif. Both are mobile depth-sensing and projection systems, but OmniTouch knows only about objects and planar surfaces placed directly in front of it at close range, while the Cambridge projector systems aim for high-fidelity awareness of the entire environment and interaction on surfaces of any shape.

The environmental awareness portion of the effort has three goals:

• Spatial awareness: The projector knows where it is in 3-D space as it moves around in real time.
• Geometry awareness: The projector knows which objects and surfaces are in an environment.
• Interaction: Users can interact with projected information through movement, gestures, and touch (a rough sketch of depth-based touch detection follows this list).
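
The article doesn’t spell out the touch-detection method, but a common approach with depth cameras is to compare a tracked fingertip against the reconstructed surface and register a touch when the two nearly coincide. A minimal sketch under that assumption; the function names and millimetre thresholds are illustrative, not from the project:

```python
import numpy as np

# Illustrative thresholds; real systems tune these to the sensor's noise.
TOUCH_MM = 10.0    # fingertip within 10 mm of the surface counts as a touch
HOVER_MM = 50.0    # within 50 mm counts as hovering above the surface

def classify_contact(fingertip, surface_depth_mm):
    """Classify a fingertip against the reconstructed surface model.

    fingertip        : (u, v, depth_mm) fingertip position in the depth image
    surface_depth_mm : 2-D array of surface depths from the environment model
    """
    u, v, finger_depth = fingertip
    # Gap between the surface behind the finger and the finger itself.
    gap = surface_depth_mm[int(v), int(u)] - finger_depth
    if abs(gap) < TOUCH_MM:           # at the surface, within sensor noise
        return "touch"
    if TOUCH_MM <= gap < HOVER_MM:    # clearly in front of, but near, the surface
        return "hover"
    return "none"
```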

This combination of awareness is key: it enables virtual content to be placed anywhere in the 3-D space and projected so that it appears in the correct real-world location, undistorted, from the user’s point of view.
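
The geometry behind this is standard pinhole projection: once the system knows the projector’s pose and the 3-D position of a piece of virtual content, it can compute which projector pixel must light up for the content to land on the right real-world spot. A minimal sketch of that math; the intrinsics, pose, and function name are illustrative:

```python
import numpy as np

def project_point(p_world, R, t, K):
    """Project a 3-D world point into the projector's image plane.

    p_world : (3,) point in world coordinates (metres)
    R, t    : projector pose (world-to-projector rotation and translation)
    K       : 3x3 projector intrinsics (focal lengths, principal point)
    """
    p_proj = R @ p_world + t          # transform into the projector's frame
    u, v, w = K @ p_proj              # apply the pinhole projection
    return np.array([u / w, v / w])   # homogeneous divide -> pixel coordinates

# Example: a virtual label anchored 2 m in front of the projector.
K = np.array([[1400.0,    0.0, 640.0],   # illustrative intrinsics for a
              [   0.0, 1400.0, 360.0],   # 1280x720 projector
              [   0.0,    0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)            # projector at the world origin
print(project_point(np.array([0.1, 0.0, 2.0]), R, t, K))  # -> [710. 360.]
```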

“We then investigate,” Molyneaux says, “what types of interactions are possible on top of these systems and develop new ways of interacting with these types of mobile displays of the future.”

The biggest challenge for this project has been developing a mobile projector-camera system that can deliver high-fidelity environmental awareness and build high-quality representations of the environment while simultaneously tracking the projector’s location.

The team developed both infrastructure-based and infrastructure-less systems that use Kinect depth sensors. The former uses ceiling-mounted Kinect cameras to sense a room and detect the locations of projectors and users, enabling whole-body sensing and interaction. The latter required the integration of multiple images from a handheld Kinect camera to build a model of the environment in real time—and led, in part, to the development of the KinectFusion 3-D reconstruction system.
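
KinectFusion-style reconstruction alternates between two steps each frame: estimate the camera’s pose by aligning the new depth image against the model built so far, then fuse the depth measurements into a volumetric truncated signed distance field (TSDF). A minimal, unoptimised sketch of the fusion step (the real system runs this in parallel on the GPU; the names and constants here are illustrative):

```python
import numpy as np

TRUNC = 0.03  # truncation distance in metres (illustrative)

def integrate_frame(tsdf, weights, depth, K, pose, voxel_size, origin):
    """Fuse one depth frame (metres) into a TSDF volume, in the spirit of
    KinectFusion. `pose` is the camera-to-world 4x4 transform from tracking."""
    inv_R = pose[:3, :3].T                  # world-to-camera rotation
    inv_t = -inv_R @ pose[:3, 3]            # world-to-camera translation
    for idx in np.ndindex(tsdf.shape):
        # Voxel centre in world coordinates, then in camera coordinates.
        p_world = origin + (np.array(idx) + 0.5) * voxel_size
        p_cam = inv_R @ p_world + inv_t
        if p_cam[2] <= 0:
            continue                        # voxel behind the camera
        u, v, w = K @ p_cam                 # project into the depth image
        u, v = int(u / w), int(v / w)
        if not (0 <= v < depth.shape[0] and 0 <= u < depth.shape[1]):
            continue
        sdf = depth[v, u] - p_cam[2]        # signed distance along the ray
        if sdf < -TRUNC:
            continue                        # voxel hidden behind the surface
        d = min(1.0, sdf / TRUNC)           # truncate and normalise
        # Weighted running average smooths sensor noise across frames.
        tsdf[idx] = (tsdf[idx] * weights[idx] + d) / (weights[idx] + 1)
        weights[idx] += 1
```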

What Molyneaux and his colleagues are doing amounts to nothing less than inventing the future. It’s an exhilarating obligation.

“I find this vision of more natural interaction with information really exciting,” Molyneaux says. “Rather than being stuck with a keyboard and mouse in front of a monitor, it’s bringing the content and interaction into the real world around us, so people can interact as they go about their daily lives.”