Leading Conversational Search by Suggesting Useful Questions

The Web Conference 2020 (formerly the WWW Conference)

This paper studies a new scenario in conversational search, conversational question suggestion, which leads search engine users to more engaging experiences by suggesting interesting, informative, and useful follow-up questions. We first establish a novel evaluation metric, usefulness, which goes beyond relevance and measures whether a suggestion provides valuable information for the next step of a user’s journey. We construct a benchmark dataset for useful question suggestion, which we plan to release publicly. We then develop two suggestion systems, a BERT ranking model and a GPT-2 generation model, both trained in a multi-task learning framework. To ground suggestions in users’ information-seeking trajectories, one of the tasks introduces novel weak supervision signals that convey past users’ search behaviors in search sessions. Using a novel embedding-based technique, we identify more coherent sessions in which users express salient information needs, and then inject them into our suggestion models to imitate how users transition to the next step of their session. Our offline experiments demonstrate the crucial role our “next-turn” inductive training plays in improving usefulness over a strong online system. Our online A/B test shows that our more useful question suggestions receive 8% more user clicks than the previous system.
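The paper does not spell out the session-filtering step in the abstract, but the idea of scoring session coherence from query embeddings can be sketched roughly as follows. This is a hypothetical illustration, not the paper's actual pipeline: `embed` is a toy stand-in for a learned query encoder, and the coherence score is simply the mean cosine similarity between consecutive queries in a session.

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy stand-in for a learned query encoder (hypothetical):
    sums a deterministic pseudo-random vector per token, then normalizes."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        rng = np.random.default_rng(abs(hash(token)) % (2**32))
        vec += rng.standard_normal(dim)
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def session_coherence(queries: list[str]) -> float:
    """Mean cosine similarity between embeddings of consecutive queries;
    a single-query session gets score 0 (no transitions to measure)."""
    if len(queries) < 2:
        return 0.0
    embs = [embed(q) for q in queries]
    sims = [float(np.dot(embs[i], embs[i + 1])) for i in range(len(embs) - 1)]
    return sum(sims) / len(sims)

def filter_coherent_sessions(sessions: list[list[str]],
                             threshold: float = 0.3) -> list[list[str]]:
    """Keep only sessions whose queries stay on a coherent topic."""
    return [s for s in sessions if session_coherence(s) >= threshold]
```

Under this sketch, a session like `["python pandas merge", "pandas merge tutorial"]` would score much higher than one whose queries share no topic, and only the high-scoring sessions would be used as weak supervision for the "next-turn" training signal.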