Targeted Adversarial Training for Natural Language Understanding

NAACL

We present a simple yet effective Targeted Adversarial Training (TAT) algorithm to improve adversarial training for natural language understanding. The key idea is to introspect current mistakes and prioritize adversarial training steps toward the cases where the model errs the most. Experiments show that TAT can significantly improve accuracy over standard adversarial training on GLUE and attain new state-of-the-art zero-shot results on XNLI. Our code will be released at: https://github.com/namisan/mt-dnn.
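To make the core idea concrete, below is a minimal, hypothetical PyTorch sketch of one targeted adversarial step: rank examples by their current per-example loss, keep the hardest fraction, and apply a one-step perturbation in embedding space to just those examples. The selection heuristic, the FGSM-style perturbation, and all names (`targeted_adversarial_step`, `top_frac`, `epsilon`) are illustrative assumptions, not the exact algorithm from the paper.

```python
# Hypothetical sketch of a targeted adversarial step (not the authors' exact code).
import torch
import torch.nn as nn
import torch.nn.functional as F

def targeted_adversarial_step(model, embeddings, labels, epsilon=1e-3, top_frac=0.5):
    """Compute an adversarial loss only on the examples the model currently
    gets most wrong (highest per-example loss)."""
    # Per-example loss on clean inputs, used only to pick the targets.
    with torch.no_grad():
        clean_logits = model(embeddings)
        per_example_loss = F.cross_entropy(clean_logits, labels, reduction="none")

    # Prioritize the hardest examples: keep the top fraction by loss.
    k = max(1, int(top_frac * embeddings.size(0)))
    target_idx = per_example_loss.topk(k).indices
    x, y = embeddings[target_idx], labels[target_idx]

    # One-step (FGSM-style) perturbation in embedding space.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    x_adv = (x + epsilon * grad.sign()).detach()

    # Adversarial loss on the targeted subset; add this to the task loss.
    return F.cross_entropy(model(x_adv), y)

# Toy usage with a stand-in classifier over pre-computed embeddings.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 3))
emb = torch.randn(16, 128)
labels = torch.randint(0, 3, (16,))
adv_loss = targeted_adversarial_step(model, emb, labels)
adv_loss.backward()  # gradients flow into the model parameters
```

In a full training loop, the adversarial loss returned here would typically be added to the ordinary task loss before the optimizer step, so the targeted examples receive extra robust-training signal.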

Publication Downloads

Multi-Task Deep Neural Networks for Natural Language Understanding (MT-DNN)

July 16, 2019

Multi-task learning toolkit for natural language understanding, including knowledge distillation.