Towards Robust Ranker for Text Retrieval

  • Yucheng Zhou ,
  • Tao Shen ,
  • Chongyang Tao ,
  • Guodong Long ,
  • Binxing Jiao ,
  • Daxin Jiang (姜大昕)

ACL 2023

A ranker plays an indispensable role in the de facto ‘retrieval & rerank’ pipeline, but its training still lags behind: it learns from moderate negatives and/or serves merely as an auxiliary module for a retriever. In this work, we first identify two major barriers to a robust ranker, i.e., inherent label noise caused by a well-trained retriever and non-ideal negatives sampled for a highly capable ranker. We therefore propose using multiple retrievers as negative generators to improve the ranker’s robustness, where i) involving extensive out-of-distribution label noise makes the ranker robust to each noise distribution, and ii) diverse hard negatives from a joint distribution lie relatively close to the ranker’s own negative distribution, leading to more challenging and thus more effective training. To evaluate our robust ranker (dubbed R2anker), we conduct experiments in various settings on the popular passage retrieval benchmark, including BM25 reranking, full ranking, retriever distillation, etc. The empirical results verify the new state-of-the-art effectiveness of our model.
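To make the negative-pooling idea concrete, below is a minimal sketch (not the authors’ released code) of training a cross-encoder ranker with hard negatives pooled from multiple retrievers. The retriever top-k lists, the `bert-base-uncased` backbone, and all hyperparameters are illustrative assumptions.

```python
# Sketch: cross-encoder ranker trained on hard negatives pooled from
# several retrievers (e.g., BM25 + one or more dense retrievers).
# All data structures and hyperparameters here are assumptions.
import random
import torch
from torch.nn.functional import cross_entropy
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
ranker = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=1)  # scalar relevance score per pair

def sample_negatives(query_id, retriever_topk, positives, n_neg=7):
    """Pool top-k passages from each retriever (one dict per retriever,
    mapping query_id -> list of passages), drop known positives, and
    sample a diverse set of hard negatives from the joint pool."""
    pool = []
    for topk in retriever_topk:
        pool.extend(p for p in topk[query_id] if p not in positives)
    return random.sample(pool, min(n_neg, len(pool)))

def train_step(query, positive, negatives, optimizer):
    """One contrastive step: the positive competes against the pooled
    negatives via cross-entropy over the ranker's relevance scores."""
    passages = [positive] + negatives
    batch = tokenizer([query] * len(passages), passages,
                      padding=True, truncation=True, return_tensors="pt")
    scores = ranker(**batch).logits.squeeze(-1)   # shape: (1 + n_neg,)
    loss = cross_entropy(scores.unsqueeze(0),     # positive sits at index 0
                         torch.zeros(1, dtype=torch.long))
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

The key design choice this sketch illustrates is drawing negatives from the union of several retrievers’ top-k lists rather than from a single retriever, so the sampled negatives approximate a joint distribution that is harder, and closer to the ranker’s own error distribution, than any single retriever’s output.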