InChorus: Designing Consistent Multimodal Interactions for Data Visualization on Tablet Devices

Proceedings of the ACM Conference on Human Factors in Computing Systems

While tablet devices are a promising platform for data visualization, supporting consistent interactions across different types of visualizations on tablets remains an open challenge. In this paper, we present multimodal interactions that function consistently across different visualizations, supporting common operations during visual data analysis. By considering standard interface elements (e.g., axes, marks) and grounding our design in a set of core concepts including operations, parameters, targets, and instruments, we systematically develop interactions applicable to different visualization types. To exemplify how the proposed interactions collectively facilitate data exploration, we employ them in a tablet-based system, InChorus, that supports pen, touch, and speech input. Based on a study with 12 participants performing replication and fact-checking tasks with InChorus, we discuss how participants adapted to using multimodal input and highlight considerations for future multimodal visualization systems.


InChorus employs multimodal interactions that function consistently across different visualizations, supporting common operations during visual data analysis. It synergistically combines multiple forms of input (i.e., pen, touch, and speech), leveraging the strengths of each modality to offset the weaknesses of the others, allowing people to stay in the flow and complete their tasks more effectively.
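
To make the core concepts concrete, the sketch below is a minimal, hypothetical illustration (in TypeScript; it is not InChorus's actual code, and all type and function names are assumptions) of how an interaction might be modeled as an operation with parameters, applied to a target through an instrument, so that pen, touch, and speech input all normalize to the same structure and downstream handling stays consistent across visualization types.

```typescript
// Hypothetical model of the paper's core concepts: an interaction pairs
// an *operation* with *parameters*, applied to a *target* (e.g., an axis
// or mark) via an *instrument* (pen, touch, or speech).

type Instrument = "pen" | "touch" | "speech";
type Operation = "select" | "filter" | "sort" | "changeAttribute";

interface Target {
  kind: "axis" | "mark" | "legend" | "canvas";
  id: string;
}

interface Interaction {
  operation: Operation;
  parameters: Record<string, unknown>;
  target: Target;
  instrument: Instrument;
}

// Illustrative dispatcher: because every modality produces the same
// Interaction shape, one handler serves all input channels.
function dispatch(interaction: Interaction): void {
  switch (interaction.operation) {
    case "sort":
      console.log(
        `Sorting ${interaction.target.kind} "${interaction.target.id}" via ${interaction.instrument}`,
        interaction.parameters
      );
      break;
    // ...other operations would be handled uniformly here.
    default:
      console.log("Unhandled operation:", interaction.operation);
  }
}

// E.g., a spoken "sort by price" and a pen stroke on the x-axis could
// both normalize to the same structure:
dispatch({
  operation: "sort",
  parameters: { attribute: "price", order: "ascending" },
  target: { kind: "axis", id: "x" },
  instrument: "speech",
});
```

Under this framing, consistency across visualizations follows from the shared interaction structure rather than from per-chart event handlers.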