Can generative LLMs create query variants for test collections? An exploratory study

  • Marwah Alaofi,
  • Luke Gallagher,
  • Mark Sanderson,
  • Falk Scholer

2023 International ACM SIGIR Conference on Research and Development in Information Retrieval

Published by ACM Press | Organized by ACM


This paper explores the utility of a Large Language Model (LLM) for automatically generating queries and query variants from a description of an information need. Given a set of information needs described as backstories, we examine how similar the queries generated by the LLM are to those generated by humans. We quantify the similarity using several metrics and assess how each query set would contribute to document pooling when building test collections. Our results show the potential of using LLMs to generate query variants: while they may not fully capture the wide variety of human-generated variants, they retrieve similar sets of relevant documents, reaching up to 71.1% overlap at a pool depth of 100.
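To make the pooling comparison concrete, the sketch below shows one way such an overlap could be computed: each set of query variants contributes the top-k documents from its rankings to a pool, and the two pools are compared. This is a hypothetical illustration, not the authors' code; the abstract does not specify the exact overlap formula, so Jaccard similarity is used here as one common choice, and all document IDs are invented.

```python
def build_pool(runs, depth):
    """Union of the top-`depth` documents across all ranked runs."""
    pool = set()
    for ranking in runs:
        pool.update(ranking[:depth])
    return pool


def pool_overlap(runs_a, runs_b, depth):
    """Jaccard overlap between the document pools of two variant sets."""
    a = build_pool(runs_a, depth)
    b = build_pool(runs_b, depth)
    union = a | b
    return len(a & b) / len(union) if union else 0.0


# Toy example: each inner list is the ranked result list for one
# query variant (human-generated vs. LLM-generated).
human_runs = [["d1", "d2", "d3"], ["d2", "d4", "d5"]]
llm_runs = [["d1", "d2", "d6"], ["d4", "d7", "d8"]]

print(pool_overlap(human_runs, llm_runs, depth=3))  # → 0.375
```

In a real test-collection setting, `depth` would correspond to the pool depth reported in the paper (e.g. 100), and the runs would come from retrieval systems executed over the two query sets.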