Understanding User Satisfaction with Intelligent Assistants

  • Julia Kiseleva,
  • Kyle Williams,
  • Jiepu Jiang,
  • Imed Zitouni,
  • Aidan Crook,
  • Tasos Anastasakos

The ACM SIGIR Conference on Human Information Interaction and Retrieval (CHIIR 2016)

Published by ACM


Voice-controlled intelligent personal assistants, such as Cortana, Google Now, Siri, and Alexa, are increasingly becoming a part of users’ daily lives, especially on mobile devices. They enable a radical change in information access, not only through voice control and touch gestures but also through longer sessions and dialogues that preserve context, necessitating evaluation of their effectiveness at the task or session level. However, in order to understand which types of user interactions reflect different degrees of user satisfaction, we need explicit judgements. In this paper, we describe a user study that was designed to measure user satisfaction over a range of typical scenarios of use: controlling a device, web search, and structured search dialog. Using this data, we study how user satisfaction varies with different usage scenarios and which signals can be used to model satisfaction in each. We find that the notion of satisfaction varies across scenarios and show that in some scenarios (e.g. making a phone call) task completion is very important, while in others (e.g. planning a night out) the amount of effort spent is key. We also study how the nature and complexity of the task at hand affect user satisfaction, and find that preserving the conversation context is essential and that overall task-level satisfaction cannot be reduced to query-level satisfaction alone. Finally, we shed light on the relative effectiveness and usefulness of voice-controlled intelligent agents, explaining their increasing popularity and uptake relative to the traditional query-response interaction.