A Practical Learning to Rank Approach for Smoothing DCG in Web Search Relevance

Discounted cumulative gain (DCG) is now widely used for measuring the performance of ranking functions, especially in the context of Web search. It is therefore natural to learn a ranking function that directly optimizes DCG. However, DCG is non-smooth, rendering efficient gradient-based optimization algorithms inapplicable. To remedy this, smoothed versions of DCG have been proposed, but with only partial success: they have yet to outperform learning to rank algorithms that use simple loss functions, such as those based on pairwise preferences. In this talk, we first present analysis showing that it is ineffective to use the gradient of the smoothed DCG to drive the optimization algorithm. We then propose a series of approaches that significantly improve the optimization results of the smoothed DCG cost function.
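To make the non-smoothness concrete, the following Python sketch contrasts standard DCG with one common style of smoothing, in which the hard sorted positions are replaced by differentiable "soft ranks" built from pairwise sigmoids. This is a minimal illustration under assumed conventions (exponential gain, logarithmic discount, and a sigmoid temperature sigma), not the exact formulation used in the talk.

import numpy as np

def dcg(scores, relevances, k=None):
    """Standard (non-smooth) DCG: sort by score, discount gain by log of position."""
    order = np.argsort(-scores)
    rels = relevances[order][:k]
    gains = 2.0 ** rels - 1.0
    discounts = 1.0 / np.log2(np.arange(2, len(rels) + 2))  # log2(rank + 1)
    return float(np.sum(gains * discounts))

def smoothed_dcg(scores, relevances, sigma=1.0):
    """Smooth surrogate: hard ranks are replaced by differentiable soft ranks.
    As sigma -> 0, the soft ranks approach the true sorted positions."""
    diffs = scores[None, :] - scores[:, None]      # diffs[i, j] = s_j - s_i
    probs = 1.0 / (1.0 + np.exp(-diffs / sigma))   # soft P(doc j ranked above doc i)
    np.fill_diagonal(probs, 0.0)
    soft_ranks = 1.0 + probs.sum(axis=1)           # expected rank of each doc
    gains = 2.0 ** relevances - 1.0
    return float(np.sum(gains / np.log2(1.0 + soft_ranks)))

# Example: with a small sigma the surrogate nearly matches the true DCG.
scores = np.array([2.0, 1.0, 0.5])
rels = np.array([3.0, 0.0, 1.0])
print(dcg(scores, rels), smoothed_dcg(scores, rels, sigma=0.01))

Larger values of sigma trade fidelity to true DCG for smoother, more informative gradients, which is exactly the tension the talk's analysis examines.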

Speaker Details

Mingrui Wu is currently a senior scientist at Yahoo! Labs, where his work focuses on Web search relevance. He has designed and implemented ranking algorithms that are widely deployed in the current Yahoo! search engines for the US, UK, and Japan markets. Before joining Yahoo!, he worked as a research scientist at the Max Planck Institute on machine learning and data mining topics, including kernel methods, data clustering, semi-supervised learning, and collaborative filtering, and he held a leading position in the Netflix Prize competition. Before that, he worked as a research engineer at MASA Group, a high-tech company in Paris, France, where he applied machine learning technologies to industrial applications such as semiconductor fault detection, fingerprint verification, and target detection.

Date:
Speakers:
Mingrui Wu
Affiliation:
Yahoo! Labs