LexLIP: Lexicon-Bottlenecked Language-Image Pre-Training for Large-Scale Image-Text Sparse Retrieval

  • Ziyang Luo ,
  • Pu Zhao ,
  • Can Xu ,
  • Xiubo Geng ,
  • Tao Shen ,
  • Chongyang Tao ,
  • Jing Ma ,
  • Daxin Jiang

ICCV'23


Image-text retrieval (ITR) aims to retrieve images or texts that match a query originating from the other modality. The conventional dense retrieval paradigm relies on encoding images and texts into dense representations with dual-stream encoders, but it suffers from slow retrieval speeds in large-scale scenarios. To address this issue, we propose a novel sparse retrieval paradigm for ITR that exploits sparse representations in the vocabulary space for images and texts. This paradigm enables us to leverage bag-of-words models and efficient inverted indexes, significantly reducing retrieval latency. A critical gap, however, arises from representing continuous image data in the sparse vocabulary space. To bridge this gap, we introduce a novel pre-training framework, Lexicon-Bottlenecked Language-Image Pre-Training (LexLIP), that learns importance-aware lexicon representations. By inserting lexicon-bottlenecked modules between the dual-stream encoders and weakened text decoders, we construct continuous bag-of-words bottlenecks and learn lexicon-importance distributions. When pre-trained on same-scale data, LexLIP achieves state-of-the-art performance on two ITR benchmarks, MSCOCO and Flickr30k. Furthermore, in large-scale retrieval scenarios, LexLIP outperforms CLIP with 5.8× faster retrieval speed and 19.1× less index storage memory. Beyond retrieval, LexLIP surpasses CLIP on 8 out of 10 zero-shot image classification tasks.
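To illustrate why sparse lexicon representations enable fast large-scale retrieval, below is a minimal sketch (not the authors' code) of inverted-index search. It assumes each image or text has already been encoded into a sparse vector over the vocabulary, i.e. a dictionary mapping terms to importance weights; scoring is a dot product that only touches postings for the query's non-zero terms. All names (`build_inverted_index`, `search`, the toy corpus) are hypothetical.

```python
# Minimal sketch of sparse retrieval over an inverted index (illustrative only).
from collections import defaultdict

def build_inverted_index(corpus):
    """corpus: {doc_id: {term: weight}} -> {term: [(doc_id, weight), ...]}"""
    index = defaultdict(list)
    for doc_id, sparse_vec in corpus.items():
        for term, weight in sparse_vec.items():
            index[term].append((doc_id, weight))
    return index

def search(index, query_vec, top_k=5):
    """Score documents by a dot product restricted to the query's non-zero terms."""
    scores = defaultdict(float)
    for term, q_weight in query_vec.items():
        for doc_id, d_weight in index.get(term, []):
            scores[doc_id] += q_weight * d_weight
    return sorted(scores.items(), key=lambda x: x[1], reverse=True)[:top_k]

# Toy example: sparse lexicon vectors standing in for two encoded images,
# queried with a sparse text representation.
image_corpus = {
    "img_dog": {"dog": 1.8, "grass": 0.9, "run": 0.6},
    "img_cat": {"cat": 2.1, "sofa": 1.0},
}
index = build_inverted_index(image_corpus)
print(search(index, {"dog": 1.5, "run": 0.4}))  # img_dog ranks first
```

Because only the postings lists of the query's active terms are scanned, the cost scales with sparsity rather than corpus size times embedding dimension, which is the source of the latency and index-memory savings reported over dense CLIP retrieval.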