What Matters in a Measure? A Perspective from Large-Scale Search Evaluation
- Paul Thomas
- Gabriella Kazai
- Nick Craswell
- Seth Spielman
2024 International ACM SIGIR Conference on Research and Development in Information Retrieval
Information retrieval (IR) has a large literature on evaluation, dating back decades and forming a central part of the research culture. The largest proportion of this literature discusses techniques to turn a sequence of relevance labels into a single number, reflecting the system’s performance: precision or cumulative gain, for example, or dozens of alternatives. Those techniques—metrics—are themselves evaluated, commonly by reference to sensitivity and validity.
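For concreteness, the sketch below shows two such techniques, precision at k and discounted cumulative gain (DCG), computed from a sequence of graded relevance labels for a single ranked result list. The function names and example labels are ours, chosen purely for illustration and not taken from the paper.

```python
import math

def precision_at_k(labels, k):
    """Fraction of the top-k results judged relevant (binary 0/1 labels)."""
    top = labels[:k]
    return sum(top) / k

def dcg_at_k(labels, k):
    """Discounted cumulative gain over the top-k graded labels.

    Each label's gain is discounted by log2(rank + 1), with ranks starting at 1.
    """
    return sum(rel / math.log2(rank + 2)   # rank is 0-based here, hence +2
               for rank, rel in enumerate(labels[:k]))

# Hypothetical graded judgements (0 = not relevant, 3 = highly relevant)
# for one ranked list of five results.
labels = [3, 2, 0, 1, 0]
print(precision_at_k([1 if r > 0 else 0 for r in labels], k=5))  # 0.6
print(dcg_at_k(labels, k=5))
```

Both functions collapse a list of per-result labels into a single number summarising system performance, which is exactly the step the paper argues is only one part of a practical measurement regime.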
In our experience measuring search in industrial settings, a measurement regime needs many other qualities to be practical. For example, we must also consider how much a metric costs; how robust it is to the happenstance of sampling; whether it is debuggable; and what activities are incentivised when a metric is taken as a goal.
In this perspective paper we discuss what makes a search metric successful in large-scale settings, including factors which are not often canvassed in IR research but which are important in “real-world” use. We illustrate this with examples, including some from industrial settings, and offer suggestions for metrics as part of a working system.