Deep Sentence Embedding Using Long Short-Term Memory Networks: Analysis and Application to Information Retrieval [IEEE Signal Processing Society 2018 Best Paper Award]

  • Hamid Palangi ,
  • Li Deng ,
  • Yelong Shen ,
  • Xiaodong He ,
  • Jianshu Chen ,
  • Xinying Song ,
  • Rabab Ward

IEEE/ACM Transactions on Audio, Speech, and Language Processing, Vol. 24, pp. 694

Publication

This paper develops a model for sentence embedding, a central topic in current natural language processing research, using recurrent neural networks (RNNs) with Long Short-Term Memory (LSTM) cells. The proposed LSTM-RNN model sequentially takes each word in a sentence, extracts its information, and embeds it into a semantic vector. Owing to its ability to capture long-term dependencies, the LSTM-RNN accumulates increasingly richer information as it proceeds through the sentence, and when it reaches the last word, the hidden layer of the network provides a semantic representation of the whole sentence. The LSTM-RNN is trained in a weakly supervised manner on user click-through data logged by a commercial web search engine.

Visualization and analysis are performed to understand how the embedding process works. The model is found to automatically attenuate unimportant words and detect the salient keywords in a sentence. Furthermore, these detected keywords activate different cells of the LSTM-RNN, with words belonging to a similar topic activating the same cell. As a semantic representation of the sentence, the embedding vector can be used in many different applications. The automatic keyword detection and topic allocation abilities of the LSTM-RNN allow it to perform document retrieval, a difficult language processing task, in which the similarity between a query and a document is measured by the distance between their corresponding sentence embedding vectors. On a web search task, the LSTM-RNN embedding significantly outperforms several existing state-of-the-art methods. We emphasize that the proposed model generates sentence embedding vectors that are especially useful for web document retrieval. A comparison with a well-known general sentence embedding method, the Paragraph Vector, shows that the proposed method significantly outperforms the Paragraph Vector on the web document retrieval task.
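The core idea described above, running an LSTM over a sentence word by word and taking the final hidden state as the sentence embedding, then ranking documents by cosine similarity to a query embedding, can be sketched as follows. This is a minimal illustrative sketch with randomly initialized weights and a toy vocabulary, not the paper's actual model: the paper uses letter-trigram word hashing and trains the weights on click-through data, both of which are omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and dimensions (illustrative assumptions, not from the paper).
vocab = {w: i for i, w in enumerate(
    ["hotel", "book", "flight", "cheap", "paris", "tickets"])}
embed_dim, hidden_dim = 8, 16

# Randomly initialized parameters; in the paper these are learned
# from click-through data. One weight matrix per LSTM gate:
# i = input, f = forget, o = output, c = cell candidate.
W_emb = rng.standard_normal((len(vocab), embed_dim)) * 0.1
Wx = {g: rng.standard_normal((embed_dim, hidden_dim)) * 0.1 for g in "ifoc"}
Wh = {g: rng.standard_normal((hidden_dim, hidden_dim)) * 0.1 for g in "ifoc"}
b = {g: np.zeros(hidden_dim) for g in "ifoc"}

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def embed_sentence(words):
    """Run the LSTM over the words; the hidden state after the last
    word serves as the semantic vector for the whole sentence."""
    h = np.zeros(hidden_dim)
    c = np.zeros(hidden_dim)
    for w in words:
        x = W_emb[vocab[w]]
        i = sigmoid(x @ Wx["i"] + h @ Wh["i"] + b["i"])  # input gate
        f = sigmoid(x @ Wx["f"] + h @ Wh["f"] + b["f"])  # forget gate
        o = sigmoid(x @ Wx["o"] + h @ Wh["o"] + b["o"])  # output gate
        g = np.tanh(x @ Wx["c"] + h @ Wh["c"] + b["c"])  # cell candidate
        c = f * c + i * g
        h = o * np.tanh(c)
    return h

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Query-document similarity is the cosine between the two embeddings.
query = embed_sentence(["cheap", "flight", "paris"])
doc = embed_sentence(["book", "flight", "tickets"])
score = cosine(query, doc)
print(score)
```

With trained weights, documents would be ranked by this score against the query; here the untrained score is only meant to show the data flow from words to embedding to similarity.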

Two notes about this work:

  1. This work was done before LSTMs became popular for NLP in the summer of 2014. It was one of the first attempts to use LSTMs for sentence embedding, and especially for information retrieval.
  2. The derivation and implementation of the LSTM model for this work were done from scratch in C# and CUDA.