Significant Improvements over the State of the Art? A Case Study of the MS MARCO Document Ranking Leaderboard

Proceedings of the 44th International ACM SIGIR Conference on Research & Development in Information Retrieval

Published by ACM

Leaderboards are a ubiquitous part of modern research in applied machine learning. By design, they sort entries into some linear order, where the top-scoring entry is recognized as the “state of the art” (SOTA). Due to the rapid progress being made in information retrieval today, particularly with neural models, the top entry in a leaderboard is replaced with some regularity. These changes are touted as improvements in the state of the art. Such pronouncements, however, are almost never qualified with significance testing. In the context of the MS MARCO document ranking leaderboard, we pose a specific question: How do we know if a run is significantly better than the current SOTA? We ask this question against the backdrop of recent IR debates on scale types: in particular, whether commonly used significance tests are even mathematically permissible. Recognizing these potential pitfalls in evaluation methodology, our study proposes an evaluation framework that explicitly treats certain outcomes as distinct and avoids aggregating them into a single-point metric. Empirical analysis of SOTA runs from the MS MARCO document ranking leaderboard reveals insights about how one run can be “significantly better” than another that are obscured by the current official evaluation metric (MRR@100).
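To make the question concrete, the sketch below computes per-query reciprocal rank at a cutoff, aggregates it into MRR@100, and compares two runs with an exact sign test that holds tied queries out rather than folding every outcome into a single mean. The helper names and the choice of a sign test are illustrative assumptions for exposition, not the paper's actual evaluation framework.

```python
import math

def rr_at_k(ranked_doc_ids, relevant_id, k=100):
    """Reciprocal rank of the first relevant document within the top k (0 if absent)."""
    for rank, doc_id in enumerate(ranked_doc_ids[:k], start=1):
        if doc_id == relevant_id:
            return 1.0 / rank
    return 0.0

def mrr_at_k(run, qrels, k=100):
    """Mean reciprocal rank over all judged queries.
    `run` maps query id -> ranked list of doc ids; `qrels` maps query id -> relevant doc id."""
    return sum(rr_at_k(run[q], qrels[q], k) for q in qrels) / len(qrels)

def sign_test(rr_a, rr_b):
    """Two-sided exact sign test on paired per-query scores.
    Queries where both runs score the same are set aside rather than folded
    into either side -- keeping distinct outcomes distinct instead of
    collapsing everything into one aggregate number."""
    wins_a = sum(1 for a, b in zip(rr_a, rr_b) if a > b)
    wins_b = sum(1 for a, b in zip(rr_a, rr_b) if a < b)
    n = wins_a + wins_b
    if n == 0:
        return 1.0  # runs are indistinguishable on every query
    # exact two-sided binomial p-value under H0: P(win) = 0.5
    k = max(wins_a, wins_b)
    tail = sum(math.comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)
```

For example, two runs that differ on only a handful of queries can show a negligible gap in MRR@100 while the sign test reveals that one run wins nearly every query on which they differ, or vice versa; that is the kind of distinction a single-point metric obscures.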

Publication Downloads

MS MARCO

May 2, 2019

MS MARCO is a collection of datasets focused on deep learning in search. The first dataset was a question answering dataset featuring 100,000 real Bing questions and human-generated answers. Since then, we have released a 1,000,000-question dataset, a natural language generation dataset, a passage ranking dataset, a keyphrase extraction dataset, a crawling dataset, and a conversational search dataset.