Discourse-Aware Neural Rewards for Coherent Text Generation

  • Antoine Bosselut,
  • Asli Celikyilmaz,
  • Xiaodong He,
  • Po-Sen Huang,
  • Yejin Choi

2018 Conference of the North American Chapter of the Association for Computational Linguistics - Human Language Technologies (NAACL-HLT 2018)

In this paper, we investigate the use of discourse-aware rewards with reinforcement learning to guide a model to generate long, coherent text. In particular, we propose to learn neural rewards that model cross-sentence ordering as a means of approximating desired discourse structure. Empirical results demonstrate that a generator trained with the learned reward produces more coherent and less repetitive text than models trained with cross-entropy loss or with reinforcement learning using commonly reported automatic scores as rewards.
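The training scheme described above can be sketched in miniature. In the snippet below, `ordering_reward` is a hypothetical stand-in for the learned neural scorer (the paper's actual reward is a trained cross-sentence ordering model), and `reinforce_update` shows the generic REINFORCE-style weighting of generation log-probabilities by the reward; both names and signatures are illustrative assumptions, not the authors' implementation.

```python
def ordering_reward(sentences, score_pair):
    """Toy stand-in for a learned coherence reward.

    Averages a pairwise ordering score over adjacent sentences in the
    generated order; `score_pair` plays the role of the trained neural
    scorer that judges whether sentence b plausibly follows sentence a.
    """
    if len(sentences) < 2:
        return 0.0
    pairs = zip(sentences, sentences[1:])
    return sum(score_pair(a, b) for a, b in pairs) / (len(sentences) - 1)


def reinforce_update(log_probs, reward, baseline=0.0):
    """Generic REINFORCE weighting: each step's negative log-probability
    is scaled by the advantage (reward - baseline), so sequences scored
    as more coherent receive a stronger positive learning signal.

    Returns the per-step loss contributions to be summed and minimized.
    """
    advantage = reward - baseline
    return [-lp * advantage for lp in log_probs]


# Illustrative usage with a trivial pairwise scorer that always returns 1.0.
sents = ["The dough is mixed.", "It rests for an hour.", "Then it is baked."]
r = ordering_reward(sents, lambda a, b: 1.0)
losses = reinforce_update([-0.5, -1.0], reward=r, baseline=0.5)
```

In practice the pairwise scorer would be a trained neural network and the log-probabilities would come from the generator's sampled output; the baseline (e.g., a greedy-decoding reward) reduces the variance of the gradient estimate.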