WritLarge

Established: May 9, 2017

WritLarge is a prototype system from Microsoft Research for the 84″ Microsoft Surface Hub, a large electronic whiteboard supporting both pen and multi-touch input. WritLarge allows creators to unleash the latent expressive power of ink in a compelling manner.


Using multi-touch, the content creator can simply frame a portion of their ‘whiteboard’ session between thumb and forefinger, and then act on such a selection (such as by copying, sharing, organizing, or otherwise transforming the content) using the pen wielded by the opposite hand.

The pen and touch inputs thereby complement one another to afford a completely new—and completely natural—way of using freeform content to “ink at the speed of thought” on Microsoft’s line of Surface devices.

WritLarge makes it easy to select and act on content on large electronic whiteboards.

WritLarge enables creators to easily indicate content with one hand while acting on it with the other. This makes it easy, for example, to select specific ink strokes to recognize, or otherwise transform and re-structure, in a rich variety of ways.


Electronic whiteboards remain surprisingly difficult to use in the context of creativity support and design. A key problem is that once a designer places strokes and reference images on a canvas, actually doing anything useful with a subset of that content involves numerous steps. Hence, scope—that is, selection of content—is a central concern, yet current techniques often require switching modes and encircling ink with a lengthy lasso, if not round-trips to the edge of the display. Only then can the user take action, such as to copy, refine, or re-interpret content.

Such is the stilted nature of selection and action in the digital world. But it need not be so. By contrast, consider an everyday manual task such as sandpapering a piece of woodwork to smooth its rough edges. Here, we use our hands to grasp and bring to the fore—that is, select—the portion of the work-object—the wood—that we want to refine. And because we are working with a tool—the sandpaper—the hand employed for this ‘selection’ sub-task is typically the non-preferred one, which skillfully manipulates the frame-of-reference for the subsequent ‘action’ of sanding, a complementary sub-task articulated by the preferred hand.

Therefore, in contrast to the disjoint subtasks foisted on us by most interactions with computers, the above example shows how complementary manual activities lend a sense of flow that “chunks” selection and action into a continuous selection-action phrase. By manipulating the workspace, the off-hand shifts the context of the actions to be applied, while the preferred hand brings different tools to bear—such as sandpaper, file, or chisel—as necessary.

The main goal of WritLarge, then, is to demonstrate similar continuity of action for electronic whiteboards. This motivated free-flowing, close-at-hand techniques to afford unification of selection and action via bimanual pen+touch interaction. To address selection, we designed a lightweight, integrated, and fast way for users to indicate scope, called the Zoom-Catcher (shown above), as follows:

With the thumb and forefinger of the non-preferred hand, the user just frames a portion of the canvas.

This sounds straightforward, and it is—from the user’s perspective. But this simple reframing of pinch-to-zoom affords a transparent, toolglass-like palette—the Zoom-Catcher, manipulated by the non-preferred hand—which floats above the canvas, and the ink strokes and reference images thereupon. The Zoom-Catcher elegantly integrates numerous steps: it dovetails with pinch-to-zoom, affording multi-scale interaction; serves as mode switch, input filter, and an illumination of a portion of the canvas—thereby doubling as a lightweight specification of scope; and once latched in, it sets the stage for action by evoking commands at hand, revealing context-appropriate functions in a location-independent manner, where the user can then act on them with the stylus (or a finger).
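To make the geometry concrete, the following is a minimal Python sketch of how a two-finger frame could map to a selection scope. The names and data structures here are illustrative assumptions for exposition, not the WritLarge implementation:

```python
from dataclasses import dataclass

@dataclass
class Stroke:
    # An ink stroke as a list of (x, y) points; illustrative only.
    points: list

def frame_region(thumb, forefinger):
    """The two framing touch points define an axis-aligned selection region."""
    (x1, y1), (x2, y2) = thumb, forefinger
    return (min(x1, x2), min(y1, y2), max(x1, x2), max(y1, y2))

def strokes_in_region(strokes, region):
    """The selection scope: strokes whose points all fall inside the frame."""
    left, top, right, bottom = region
    return [s for s in strokes
            if all(left <= x <= right and top <= y <= bottom
                   for x, y in s.points)]
```

In this sketch the frame doubles as both viewport gesture and scope: the same two contact points that would drive pinch-to-zoom also yield the rectangle that filters the ink.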

Recognizing select ink strokes in WritLarge

Recognizing select content in WritLarge. The content creator can easily select, and act on, only the specific ink strokes of interest. The recognized results then preserve the position and baseline orientation that are naturally expressed in the ink.
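The baseline orientation of a run of handwriting can be estimated, for instance, by a least-squares line fit through the stroke points; a hypothetical sketch (this is one plausible approach, not necessarily how WritLarge computes it):

```python
import math

def baseline_angle(points):
    """Least-squares fit of a line through ink points; the slope gives the
    baseline orientation (in degrees) that recognized text could preserve."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)   # variance along x
    sxy = sum((x - mx) * (y - my) for x, y in points)  # covariance of x and y
    return math.degrees(math.atan2(sxy, sxx))
```

Recognized text rendered at this angle, anchored at the strokes’ position, keeps the result looking like it belongs where the ink was written.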


Organizing free-form content into a grid

This example shows how content creators can easily select items and organize them into a grid layout.
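One plausible way to compute such a grid placement for a set of selected items, sketched in Python (the function and its parameters are hypothetical, not drawn from the system itself):

```python
def grid_layout(items, cols, cell_w, cell_h, origin=(0, 0)):
    """Assign each selected item a top-left position on a regular grid,
    filling rows left to right from the given origin."""
    ox, oy = items and origin or origin  # origin used as-is; kept simple
    ox, oy = origin
    return [(ox + (i % cols) * cell_w, oy + (i // cols) * cell_h)
            for i in range(len(items))]
```

Each selected item is then animated from its free-form position to its assigned cell, which is what makes the restructuring feel continuous rather than destructive.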


Rewinding time on a select portion of the canvas

Likewise, content creators can rewind time for a select portion of the canvas, allowing earlier states of a sketch to be retrieved, for example.
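If each stroke carries a timestamp, region-scoped rewind can be sketched as a simple filter: strokes outside the framed region are untouched, while inside it, strokes drawn after the rewind time disappear. This is an assumed model for illustration, not the system’s actual history mechanism:

```python
def rewind_region(strokes, region, t):
    """Retrieve an earlier state of the framed portion of the canvas.
    Each stroke is a (points, timestamp) pair; points is [(x, y), ...]."""
    left, top, right, bottom = region

    def inside(points):
        return all(left <= x <= right and top <= y <= bottom
                   for x, y in points)

    # Keep a stroke if it lies outside the region, or predates time t.
    return [(pts, ts) for pts, ts in strokes if not inside(pts) or ts <= t]
```

Because the filter is non-destructive, sliding the rewind time forward again restores the later strokes, so earlier states can be browsed rather than committed.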


Building from this key insight, our work contributes unified selection and action by bringing together the following:

  • Lightweight specification of scope via the Zoom-Catcher, in a way that continuously dovetails with pinch-to-zoom.
  • Unified, multi-scale selection and action with pen+touch, with both hands in complementary roles.
  • Flexible, interpretation-rich, and easily reversible representations of content, with a clear mental model of levels spatially organized along semantic, structural, and temporal axes of movement.
  • An approach that thereby unleashes many natural attributes of ink, such as the position, size, orientation, textual content, and implicit structure of handwriting.
  • Complete user control over what gets recognized—as well as when recognition occurs—so as not to break the flow of creative work.
  • A preliminary evaluation of the system with users, which suggests that combining zooming and selection in this manner works extremely well and is self-revealing for most users.

Collectively, these contributions aim to reduce the impedance mismatch between human and technology, thus enhancing the interactional fluency between a creator’s ink strokes and the resulting representations at their command.


Key collaborators on this project include Haijun Xia (University of Toronto) and Xiao Tu (Microsoft).

People


Ken Hinckley

Senior Principal Research Manager


Michel Pahud

Principal Research Software Development Engineer