February 11, 2019 - February 15, 2019

Microsoft @ WSDM 2019

Location: Melbourne, Australia


Industry Day

A Case Study on Microsoft’s Ruuh.ai: Is User Growth a Peril to Research Progress?

Monday, February 11 | 11:00 AM–12:30 PM
Puneet Agrawal

Striking a balance between business goals such as user growth and deep, meaningful research is always a challenge in an industrial research setting. In this talk, taking Microsoft’s Ruuh as a case study, we will discuss the challenges and opportunities that industry presents for research. Microsoft’s Ruuh was conceptualized about two and a half years ago, and its central product promise is the ability to talk to users on any subject they choose. We realized that this promise meant thinking beyond the utilitarian notion of merely generating “relevant” responses, and instead enabling Ruuh to comprehend and meet a wider range of users’ social needs: expressing happiness when a user’s favorite team wins, sharing a cute comment when shown pictures of the user’s pet, and so on. At the outset, this seemed an impossible task, coupled as it was with aggressive release deadlines and pressure to grow usage. In this talk, however, we will discuss how our research progress helped drive user growth and vice versa, as well as scenarios where we suffered setbacks. A high-quality product leads to high usage, which in turn provides the data needed to improve the research and expose flaws in the current approach. At the same time, high usage forces the team to focus on efficiency, cost per query, and other infrastructure-related workloads. This talk will use real-world examples to explain these tradeoffs.

‘No Interaction’ as Indicator of Search Satisfaction: Accounting for Good Abandonment in User Success Metrics

Monday, February 11 | 11:00 AM–12:30 PM
Widad Machmouchi

At Bing, measuring user success has always been a deciding factor in which features or changes are shipped to production. Such changes are tested through randomized controlled experiments, where success metrics measure the treatment effect on user satisfaction. Over the years, we have designed and refined our metrics to capture various user interactions, from search queries to clicks and hovers, and interpreted them to predict users’ satisfaction with the search engine. One of the hardest scenarios to interpret is search result page abandonment, where the user neither clicks on the page nor interacts with any specific element. Here we need to differentiate cases where the user abandoned because they got the information they needed without clicking any results from cases where the user abandoned because the search result page was defective or unsatisfactory. In this talk, we outline Bing’s journey in addressing this measurement problem: from our initial effort of treating the presence of specific elements on the page as an indicator of success, through our offline/online hybrid approach to identifying good abandonment, to a fully online solution that relies on a user’s behavior across their search session. We also cover the pitfalls of the different approaches, how we evaluate them, and the challenges and problems left to solve.
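The core distinction the talk draws, good versus bad abandonment inferred from session behavior, can be illustrated with a toy heuristic. This is a minimal sketch, not Bing’s actual metric: the `Query` type, the reformulation-window threshold, and the labeling rules are all hypothetical assumptions for illustration only.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Query:
    """One query in a search session (hypothetical log schema)."""
    text: str
    timestamp: float  # seconds since session start
    clicked: bool     # whether any result on this query's page was clicked

def label_abandonment(session: List[Query], i: int,
                      reformulation_window: float = 60.0) -> Optional[bool]:
    """Heuristically label query i in a session.

    Returns None if the query was not abandoned (the page had a click),
    True for likely *good* abandonment, False for likely *bad* abandonment.
    Assumed heuristic: a quick follow-up query after an abandoned page
    suggests the user did not find what they needed.
    """
    q = session[i]
    if q.clicked:
        return None  # not an abandonment at all
    nxt = session[i + 1] if i + 1 < len(session) else None
    if nxt is None:
        # Session ended on an abandoned page: treat as good abandonment
        # (the user may have read the answer directly on the page).
        return True
    if nxt.timestamp - q.timestamp <= reformulation_window:
        # Fast reformulation: likely a dissatisfied user.
        return False
    return True
```

For example, a session of `[Query("melbourne weather", 0.0, False), Query("melbourne weather forecast", 12.0, False)]` would label the first query as bad abandonment, since the user reformulated within seconds. The talk's fully online approach replaces such hand-tuned rules with models of cross-session behavior, but the input signals are of this session-level kind.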