Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval

  • Lee Xiong,
  • Chenyan Xiong,
  • Ye Li,
  • Kwok-Fung Tang,
  • Jialin Liu,
  • Paul Bennett,
  • Junaid Ahmed,
  • Arnold Overwijk

International Conference on Learning Representations (ICLR) 2021


Conducting text retrieval in a dense learned representation space has many intriguing advantages over sparse retrieval. Yet the effectiveness of dense retrieval (DR) often requires combining it with sparse retrieval. In this paper, we identify that the main bottleneck is in the training mechanisms, where the negative instances used in training are not representative of the irrelevant documents seen in testing. This paper presents Approximate nearest neighbor Negative Contrastive Estimation (ANCE), a training mechanism that constructs negatives from an Approximate Nearest Neighbor (ANN) index of the corpus, which is updated in parallel with the learning process to select more realistic negative training instances. This fundamentally resolves the discrepancy between the data distributions used in the training and testing of DR. In our experiments, ANCE boosts the BERT-Siamese DR model to outperform all competitive dense and sparse retrieval baselines. It nearly matches the accuracy of sparse-retrieval-and-BERT-reranking using dot-product in the ANCE-learned representation space and provides almost 100x speed-up.
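
To make the mechanism concrete, below is a minimal PyTorch/Faiss sketch of an ANCE-style training loop. This is not the authors' implementation: `ToyEncoder` stands in for the BERT-Siamese encoder, and the corpus sizes, refresh interval (`REFRESH_EVERY`), and negative count (`K_NEG`) are illustrative assumptions. The key idea it shows is that the ANN index is rebuilt from the current encoder at intervals (the paper does this in parallel with training) so that hard negatives are always drawn from the up-to-date representation space.

```python
# Minimal ANCE-style sketch, not the paper's implementation.
# Assumptions: ToyEncoder replaces BERT-Siamese; REFRESH_EVERY, K_NEG,
# and all data shapes are illustrative.
import torch
import torch.nn.functional as F
import faiss

DIM, VOCAB, N_DOCS, N_QUERIES = 64, 1000, 500, 200

class ToyEncoder(torch.nn.Module):
    """Stand-in for the shared query/document encoder: mean-pooled embeddings."""
    def __init__(self):
        super().__init__()
        self.emb = torch.nn.Embedding(VOCAB, DIM)
    def forward(self, ids):              # ids: (batch, seq_len)
        return self.emb(ids).mean(dim=1) # (batch, DIM)

def build_ann_index(encoder, doc_ids):
    """Re-encode the corpus with the current encoder and index it by dot product."""
    with torch.no_grad():
        doc_vecs = encoder(doc_ids).numpy().astype("float32")
    index = faiss.IndexFlatIP(DIM)  # inner-product (dot-product) index
    index.add(doc_vecs)
    return index

encoder = ToyEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

docs = torch.randint(0, VOCAB, (N_DOCS, 16))        # toy corpus
queries = torch.randint(0, VOCAB, (N_QUERIES, 8))   # toy queries
positives = torch.randint(0, N_DOCS, (N_QUERIES,))  # gold doc id per query

REFRESH_EVERY, K_NEG = 50, 4  # assumed hyperparameters
index = build_ann_index(encoder, docs)

for step in range(200):
    # ANCE: periodically refresh the ANN index so negatives track the
    # evolving representation space (done in parallel in the paper).
    if step > 0 and step % REFRESH_EVERY == 0:
        index = build_ann_index(encoder, docs)

    qi = torch.randint(0, N_QUERIES, (8,))  # mini-batch of query ids
    q_vec = encoder(queries[qi])
    _, nbr = index.search(q_vec.detach().numpy().astype("float32"), K_NEG + 1)

    losses = []
    for row, q in enumerate(qi):
        pos = positives[q].item()
        # Hard negatives: top ANN hits for the query, excluding the positive.
        negs = [int(d) for d in nbr[row] if int(d) != pos][:K_NEG]
        cand = torch.stack([encoder(docs[[pos]])[0]] +
                           [encoder(docs[[d]])[0] for d in negs])
        scores = q_vec[row] @ cand.T  # dot-product relevance, positive at index 0
        losses.append(F.cross_entropy(scores.unsqueeze(0),
                                      torch.zeros(1, dtype=torch.long)))
    loss = torch.stack(losses).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```

Note that `faiss.IndexFlatIP` performs exact dot-product search; the near-100x speed-up the abstract reports comes from serving retrieval with dot products in the learned space (optionally via approximate indexes) instead of running a BERT reranker over candidate documents.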

Publication Downloads

Approximate nearest neighbor Negative Contrastive Estimation (ANCE)

April 23, 2021

A novel embedding training algorithm that leverages ANN search to achieve state-of-the-art retrieval accuracy on the TREC DL 2019 and OpenQA benchmarks.