DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation
- Yizhe Zhang,
- Siqi Sun,
- Michel Galley,
- Yen-Chun Chen,
- Chris Brockett,
- Xiang Gao,
- Jianfeng Gao,
- JJ (Jingjing) Liu,
- Bill Dolan
We present a large, tunable neural conversational response generation model, DialoGPT (dialogue generative pre-trained transformer). Trained on 147M conversation-like exchanges extracted from Reddit comment chains spanning 2005 through 2017, DialoGPT extends the Hugging Face PyTorch transformer to attain performance close to that of humans, in terms of both automatic and human evaluation, in single-turn dialogue settings. We show that conversational systems that leverage DialoGPT generate more relevant, contentful, and context-consistent responses than strong baseline systems. The pre-trained model and training pipeline are publicly released to facilitate research into neural response generation and the development of more intelligent open-domain dialogue systems.
GitHub link: https://github.com/microsoft/dialogpt
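As a pointer for getting started with the released checkpoints, here is a minimal sketch of single-turn response generation through the Hugging Face transformers library. The `microsoft/DialoGPT-medium` checkpoint name comes from the public release; the decoding parameters below are illustrative choices, not the paper's exact configuration:

```python
# Minimal sketch: single-turn response generation with a released
# DialoGPT checkpoint via Hugging Face transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

# DialoGPT delimits dialogue turns with the end-of-text token.
prompt = "Does money buy happiness?" + tokenizer.eos_token
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sampling-based decoding; these settings are illustrative.
output_ids = model.generate(
    input_ids,
    max_length=200,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=True,
    top_k=50,
    top_p=0.95,
)

# Strip the input context so only the newly generated turn remains.
response = tokenizer.decode(
    output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True
)
print(response)
```

For multi-turn use, previous turns can be concatenated (each terminated by the end-of-text token) and passed as the context, in line with the turn-delimited format the model was trained on.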