End-to-End Memory Networks with Knowledge Carryover for Multi-Turn Spoken Language Understanding
- Yun-Nung Vivian Chen
- Dilek Hakkani-Tür
- Gokhan Tur
- Jianfeng Gao
- Li Deng
Proceedings of the 17th Annual Conference of the International Speech Communication Association (INTERSPEECH 2016). Published by ISCA.
Spoken language understanding (SLU) is a core component of a spoken dialogue system. In the traditional dialogue-system architecture, the SLU component treats each utterance independently, and downstream components aggregate the multi-turn information in separate phases. This design poses two challenges: 1) errors from previous turns may propagate and degrade the performance of the current turn; 2) knowledge mentioned far back in the history may not be carried into the current turn. This paper addresses these issues by proposing an architecture that uses end-to-end memory networks to model knowledge carryover in multi-turn conversations: utterances encoded with intents and slots are stored as embeddings in the memory, and the decoding phase applies an attention model to leverage previously stored semantics for intent prediction and slot tagging simultaneously. Experiments on Microsoft Cortana conversational data show that the proposed memory network architecture effectively extracts salient semantics for modeling knowledge carryover in multi-turn conversations and outperforms the state-of-the-art recurrent neural network (RNN) framework designed for single-turn SLU.
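The core mechanism the abstract describes, attending over stored utterance embeddings to build a knowledge-carryover vector for the current turn, can be sketched as follows. This is a minimal illustration with NumPy, not the paper's implementation: the function names, dimensions, and the inner-product scoring are assumptions for demonstration, and the real model learns the embeddings and combines the readout with an RNN tagger.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def memory_attention(history_embs, current_emb):
    """Attend over stored utterance embeddings (the 'memory') using the
    current-turn embedding as the query; return a weighted readout that
    carries history semantics into the current turn."""
    scores = history_embs @ current_emb      # inner-product match per memory slot
    weights = softmax(scores)                # attention distribution over history
    readout = weights @ history_embs         # weighted sum of memory embeddings
    return readout, weights

# toy example: 4 past utterances stored as 8-dim embeddings (random for illustration)
rng = np.random.default_rng(0)
memory = rng.normal(size=(4, 8))
query = rng.normal(size=8)                   # current utterance embedding
history_vec, attn = memory_attention(memory, query)
```

In the full model, `history_vec` would be combined with the current utterance encoding before intent prediction and slot tagging, so the tagger can resolve references that depend on earlier turns.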