Composite Task-Completion Dialogue Policy Learning via Hierarchical Deep Reinforcement Learning

  • Baolin Peng,
  • Xiujun Li,
  • Lihong Li,
  • Jianfeng Gao,
  • Asli Celikyilmaz,
  • Sungjin Lee,
  • Kam-Fai Wong

Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing

Building a dialogue agent to fulfill complex tasks, such as travel planning, is challenging because the agent has to learn to collectively complete multiple subtasks. For example, the agent needs to reserve a hotel and book a flight so that there is enough time for the commute between the flight's arrival and the hotel check-in. This paper addresses the challenge by formulating the task in the mathematical framework of options over Markov Decision Processes (MDPs), and by proposing a hierarchical deep reinforcement learning approach to learning a dialogue manager that operates at different temporal scales. The dialogue manager consists of: (1) a top-level dialogue policy that selects among subtasks or options, (2) a low-level dialogue policy that selects primitive actions to complete the subtask given by the top-level policy, and (3) a global state tracker that helps ensure that all cross-subtask constraints are satisfied. Experiments on a travel planning task with simulated and real users show that our approach leads to significant improvements over three baselines: two based on handcrafted rules and the third based on flat deep reinforcement learning.
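To make the two-level architecture concrete, below is a minimal runnable sketch, not the authors' implementation: the subtask and action names, the toy slot-filling environment, and the tabular Q-learning (standing in for the paper's deep Q-networks) are all hypothetical simplifications for illustration.

```python
# Sketch of a hierarchical dialogue manager: a top-level policy picks a
# subtask (option), a low-level policy picks primitive dialogue acts to
# complete it, and a shared tracker holds cross-subtask state. All names
# and rewards here are hypothetical.
import random
from collections import defaultdict

SUBTASKS = ["book_flight", "reserve_hotel"]                    # options
PRIMITIVES = ["request_slot", "inform_slot", "close_subtask"]  # dialogue acts

class TabularQPolicy:
    """Epsilon-greedy Q-learning; a stand-in for a deep Q-network."""
    def __init__(self, actions, eps=0.1, alpha=0.5, gamma=0.99):
        self.q = defaultdict(float)
        self.actions, self.eps = actions, eps
        self.alpha, self.gamma = alpha, gamma

    def act(self, state):
        if random.random() < self.eps:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, s, a, r, s2):
        target = r + self.gamma * max(self.q[(s2, b)] for b in self.actions)
        self.q[(s, a)] += self.alpha * (target - self.q[(s, a)])

top = TabularQPolicy(SUBTASKS)    # top-level policy: selects a subtask
low = TabularQPolicy(PRIMITIVES)  # low-level policy: completes the subtask

def run_episode(max_low_steps=6):
    tracker = {s: 0 for s in SUBTASKS}  # global state tracker: slots filled
    extrinsic = 0.0
    for _ in range(len(SUBTASKS)):      # one top-level decision per subtask
        top_state = str(sorted(tracker.items()))
        option = top.act(top_state)
        option_return = 0.0
        for _ in range(max_low_steps):  # low-level turns inside the option
            low_state = (option, tracker[option])
            action = low.act(low_state)
            if action == "close_subtask":
                # Intrinsic reward: the subtask succeeds once 2 slots are
                # filled (a toy success criterion).
                r = 2.0 if tracker[option] >= 2 else -1.0
                low.update(low_state, action, r, (option, tracker[option]))
                option_return += r
                break
            tracker[option] += 1        # toy effect: one more slot filled
            low.update(low_state, action, -0.1, (option, tracker[option]))
            option_return += -0.1       # per-turn cost favors short dialogues
        # The top level is credited with the option's cumulative return;
        # cross-subtask constraints would be checked via the shared tracker.
        extrinsic += option_return
        top.update(top_state, option, option_return,
                   str(sorted(tracker.items())))
    return extrinsic

for _ in range(200):  # brief training run
    run_episode()
```

In this sketch the low-level policy learns from intrinsic rewards for finishing its assigned subtask, while the top-level policy learns from each option's cumulative return; the shared tracker is where a constraint such as "hotel check-in must follow flight arrival" would be enforced, along the lines of the global state tracker described in the abstract.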