Evaluating Paraphrastic Robustness in Textual Entailment Models

  • Dhruv Verma,
  • Yash Kumar Lal,
  • Shreyashee Sinha,
  • Adam Poliak

ACL 2023

We present PaRTE, a collection of 1,126 pairs of Recognizing Textual Entailment (RTE) examples to evaluate whether models are robust to paraphrasing. We posit that if RTE models understand language, their predictions should be consistent across inputs that share the same meaning. We use the evaluation set to determine if RTE models' predictions change when examples are paraphrased. In our experiments, contemporary models change their predictions on 8-16% of paraphrased examples, indicating that there is still room for improvement.
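Below is a minimal sketch, not the authors' released code, of the consistency check the abstract describes: run an RTE model on each original premise/hypothesis pair and on its paraphrased counterpart, then report the fraction of examples whose predicted label changes. The `predict` callable is a hypothetical stand-in for whatever classifier is being evaluated.

```python
from typing import Callable, List, Tuple

Example = Tuple[str, str]  # (premise, hypothesis)


def prediction_flip_rate(
    predict: Callable[[str, str], str],  # hypothetical model interface
    originals: List[Example],
    paraphrases: List[Example],
) -> float:
    """Fraction of paired examples whose predicted RTE label changes.

    `originals[i]` and `paraphrases[i]` are assumed to be meaning-preserving
    paraphrases of each other, so a robust model should assign them the
    same label.
    """
    assert len(originals) == len(paraphrases)
    flips = 0
    for (p_orig, h_orig), (p_para, h_para) in zip(originals, paraphrases):
        if predict(p_orig, h_orig) != predict(p_para, h_para):
            flips += 1
    return flips / len(originals)
```

A flip rate of 0 would indicate perfect paraphrastic consistency; the 8-16% figure reported above corresponds to flip rates of 0.08-0.16 under this kind of metric.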