Preview of ACM’s Multimedia Conference Oct 29 keynote address


Zhengyou Zhang, research manager and principal researcher at Microsoft Research, will present his team’s latest advances in immersive human-human telecommunications at ACM’s annual multimedia conference in Brisbane, Australia. The 2015 ACM Multimedia Conference runs October 26–30; view the full conference program for details.

In an October 29 keynote address titled “Vision-enhanced Immersive Interaction and Remote Collaboration with Large Touch Displays,” Dr. Zhang will demonstrate how his team’s Kinect-inspired technology enables remote teams to feel as if they were working together in the same room.


The importance and impact of such immersive experiences first came to prominence in 2012 with Dr. Zhang’s paper, Microsoft Kinect Sensor and Its Effect, published in the journal IEEE MultiMedia. It has since become one of the publication’s most downloaded papers and earned Dr. Zhang the journal’s 2015 Best Department Article Award, adding to his extensive list of honors.

Dr. Zhang, who leads the Multimedia, Interaction, and Experience (MIX) group at Microsoft Research, will give ACM conference attendees a close-up view of ViiBoard (Vision-enhanced Immersive Interaction with touch Board). The system, consisting of two components, VTouch and ImmerseBoard, enables “natural interaction and immersive remote collaboration with large touch displays by adding a commodity color plus depth sensor,” according to ACM conference notes.

  • VTouch uses an RGBD sensor such as the Microsoft Kinect to understand where the user is, who the user is, and what the user is doing, even before the user touches the display (a simple sketch of this pre-touch idea follows the list).
  • ImmerseBoard uses 3D processing of depth images, life-sized rendering, and novel visualizations to emulate writing side-by-side on either a physical whiteboard or mirror.
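To make the pre-touch idea concrete, here is a minimal Python sketch of how a depth frame might reveal that a hand is approaching the display before any contact occurs. It is an illustration only, not Microsoft’s implementation: the thresholds, the classify_hand_state helper, and the simulated frame are all assumptions.

```python
import numpy as np

# Hypothetical illustration of the pre-touch idea behind VTouch (not
# Microsoft's implementation). Assume a depth sensor such as the Kinect
# is mounted so each pixel reports distance from the display plane in mm.

HOVER_MM = 150  # assumed threshold: hand is approaching the display
TOUCH_MM = 10   # assumed threshold: hand is effectively on the glass

def classify_hand_state(depth_frame):
    """Return (state, (row, col)) for the point nearest the display plane."""
    idx = np.unravel_index(np.argmin(depth_frame), depth_frame.shape)
    nearest_mm = depth_frame[idx]
    if nearest_mm <= TOUCH_MM:
        state = "touch"
    elif nearest_mm <= HOVER_MM:
        state = "hover"  # the UI could pre-highlight controls under the hand
    else:
        state = "idle"
    return state, idx

# Simulated 480x640 frame: background 2 m away, a hand patch 80 mm away.
frame = np.full((480, 640), 2000, dtype=np.int32)
frame[200:240, 300:340] = 80
print(classify_hand_state(frame))  # hand detected hovering at (200, 300)
```

In the actual system, the depth data would presumably also feed the tracking that determines who is standing where and what they are doing; the nearest-point test above captures only the simplest “before the user touches” signal.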

The net effect provides remote participants with “a quantitatively better ability to estimate their remote partners’ eye gaze direction, gesture direction, intention, and level of agreement.”

To date, only brief details of ViiBoard have been released, most notably from the following online videos:

More details about ViiBoard can be found in the following two recent conference papers:

ImmerseBoard’s form factor is described in conference notes as “suitable for practical and easy installation in homes and offices.” Public availability has yet to be announced.

—John Kaiser, Research News

For more computer science research news, visit ResearchNews.com.
