CIKM: “Slow Search with People” highlights welcoming keynote

Making search better by slowing it down will be explored in the welcoming keynote when CIKM convenes in Melbourne, Australia this week.

In “Slow Search: Improving Information Retrieval Using Human Assistance,” Principal Researcher Jaime Teevan will share some of the latest findings coming out of Microsoft Research’s Context, Learning, and User Experience for Search group.

The 24th ACM International Conference on Information and Knowledge Management, running Oct. 19-23, brings together leading researchers in the disciplines of information retrieval, knowledge management and databases.

The keynote, to be delivered Tuesday, focuses “on how search engines can make use of additional time to employ a resource that is inherently slow: other people,” Teevan states in conference notes. “Using crowdsourcing and friendsourcing, I will highlight opportunities for search systems to support new search experiences with high quality result content that takes time to identify.”

The “Slow Search with People” initiative began in 2013 and has since been quietly gaining momentum in a bid to meld the scale and speed of machine intelligence with the quality and depth of analysis from real people.

The effort takes a somewhat contrarian approach to the ubiquitous task of finding information, which users expect to be instantaneous. Incredibly, Teevan notes, even a 100-200 millisecond delay is enough to trigger significant user dissatisfaction. It’s no accident, she adds, that Google recently exchanged its long-cherished logo for a sans-serif version that loads faster across multiple devices.

Although strictly algorithmic search reliably delivers quick answers to simple questions, getting quality results to complex inquiries in economics, psychology, and other fields has proven more elusive.

But what if some of the laborious and repetitive attempts to gain better answers from search could be outsourced to the crowd or simply “friendsourced” on social media? It’s this prospect of freeing up an organization’s most valuable talent to focus on the truly hard stuff that helps propel “Slow Search” forward.

“Using the crowd is a good place to start because we can think about what we might be able to do algorithmically in the future (maybe even fast), like a giant Wizard-of-Oz experiment,” Teevan says.

It can help address “really complex things that require deep understanding … to explore things we can’t yet do algorithmically.”

The crowd, which now mostly refers to the “turkers” on Amazon’s Mechanical Turk (MTurk), has quickly become the dominant pool of worker bees for tasks that machines can’t do well, such as describing an image or choosing an ad preference. Much of the research presented throughout the week at CIKM could well rely on the results of tasks performed on MTurk.
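
For readers curious what posting such a task looks like in practice, here is a minimal sketch using the boto3 Python SDK against MTurk’s requester sandbox. The image URL, reward, and task wording are illustrative assumptions, not details from the talk or the conference.

```python
# Minimal sketch: posting an image-description task (a "HIT") to Amazon
# Mechanical Turk via boto3. Uses the sandbox endpoint, so no real money
# changes hands; assumes AWS credentials are already configured.
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# A QuestionForm asking workers for a free-text description of an image.
# The image URL is a placeholder for illustration.
question_xml = """
<QuestionForm xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2005-10-01/QuestionForm.xsd">
  <Question>
    <QuestionIdentifier>describe</QuestionIdentifier>
    <QuestionContent>
      <Text>In one or two sentences, describe the image at https://example.com/photo.jpg</Text>
    </QuestionContent>
    <AnswerSpecification><FreeTextAnswer/></AnswerSpecification>
  </Question>
</QuestionForm>"""

hit = mturk.create_hit(
    Title="Describe an image",
    Description="Write a one- or two-sentence description of a photo.",
    Keywords="image, describe, labeling",
    Reward="0.05",                    # USD per assignment (placeholder)
    MaxAssignments=3,                 # redundant judgments help catch noise
    LifetimeInSeconds=86400,          # HIT stays available for one day
    AssignmentDurationInSeconds=300,  # each worker gets five minutes
    Question=question_xml,
)
print("Created HIT:", hit["HIT"]["HITId"])
```

Requesting several assignments per task, as above, is the usual hedge against the quality and manipulation concerns discussed below: agreement across independent workers is a cheap signal of a trustworthy answer.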

The method has won many fans, including cognitive psychologist and Microsoft researcher Dan Goldstein, who recently called it “one of the most important and beneficial innovations in the history of psychology,” according to the Financial Times. The speed of the research enables far more rapid progress and, because MTurk is so cheap, much larger samples can be used, Goldstein explained. But if the title of the Financial Times article by “Undercover Economist” Tim Harford is any indication (“Should we trust the young Turkers?”), some issues remain to be fully worked out.

Likewise, Teevan warns of the downsides of relying on crowdsourcing alone, pointing out that her team’s own research shows how results can be easily manipulated or distorted. “What is the risk of the crowd being used in a coordinated manner to force a system to come up with the wrong outcome?” Teevan asks.

If the risks of crowdsourcing alone prove too high, that’s where friendsourcing comes in.

Queries to actual friends on social media are more likely to generate highly personalized results and can even inspire moments of near heroism, like the person Teevan cites who responded to a friend’s question by typing up his grandmother’s handwritten recipe, creating an “entirely new piece of content.”

That’s one way to beat the strictly algorithmic search engines.

Jaime Teevan is a Principal Researcher at Microsoft Research in the Context, Learning, and User Experience for Search (CLUES) group, and an Affiliate Assistant Professor in the Information School at the University of Washington. Working at the intersection of human-computer interaction, information retrieval, and social media, she studies and supports people’s information seeking activities. Jaime is best known for her research on personalized search, and she developed the first personalized search algorithm used by Microsoft’s Bing search engine. The MIT Technology Review recognized Jaime’s pioneering work by naming her one of 2009’s “35 Innovators Under 35,” and the CRA-W honored her in 2014 with the Borg Early Career Award.

—John Kaiser, Research News

For more computer science research news, visit ResearchNews.com.