Transfer Learning for User Adaptation in Spoken Dialogue Systems

  • Aude Genevay,
  • Romain Laroche

Proceedings of the 15th International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS)

This paper focuses on user adaptation in Spoken Dialogue Systems. We assume that the system has already been optimised with Reinforcement Learning methods for a set of known users. The goal is to transfer this prior knowledge so that the system adapts to a new user as quickly as possible, without degrading asymptotic performance. The first contribution is a source selection method based on a stochastic multi-armed bandit algorithm, which improves the jumpstart, i.e. the average performance at the start of the learning curve. Unlike previous source selection methods, it requires no metric between users and is parameter-free. The second contribution is a novel method for selecting the most informative transitions within the previously chosen source: only transitions that were not observed with the target user are transferred, so as to improve the target model. In our experiments, Reinforcement Learning is performed with the Fitted Q-Iteration algorithm. Both methods are validated on a negotiation game: an appointment scheduling simulator that allows the definition of simulated user models with diversified behaviours. Compared to state-of-the-art transfer algorithms, results show significant improvements in both jumpstart and asymptotic performance.
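To make the source selection idea concrete, here is a minimal sketch of stochastic bandit source selection, where each arm corresponds to one source user and the reward is the observed dialogue performance after transferring from that source. This uses the standard UCB1 index as an illustration; the paper's exact bandit algorithm and reward definition may differ, and all function names and reward values below are hypothetical.

```python
import math
import random

def ucb1_select(counts, means, t):
    """Pick the arm (source user) maximising the UCB1 index.

    counts[i] -- number of times source i was chosen so far
    means[i]  -- empirical mean reward (dialogue performance) of source i
    t         -- current round (1-indexed)
    """
    for i, n in enumerate(counts):
        if n == 0:
            return i  # play every arm once before using the index
    indices = [mu + math.sqrt(2.0 * math.log(t) / n)
               for n, mu in zip(counts, means)]
    return max(range(len(indices)), key=indices.__getitem__)

def run_bandit(reward_fns, horizon, seed=0):
    """Simulate source selection over `horizon` dialogues.

    reward_fns[i]() returns the (possibly noisy) performance obtained
    when transferring from source i -- a stand-in for a real dialogue.
    """
    random.seed(seed)
    k = len(reward_fns)
    counts, means = [0] * k, [0.0] * k
    for t in range(1, horizon + 1):
        arm = ucb1_select(counts, means, t)
        r = reward_fns[arm]()
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]  # incremental mean
    return counts, means
```

With three hypothetical sources whose transfers yield average performances 0.2, 0.8 and 0.5, the bandit concentrates its picks on the best source while still exploring the others, which is what drives the jumpstart improvement without requiring any a-priori metric between users.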