Using Intermediate Representations to Solve Math Word Problems
- Danqing Huang
- Jin-Ge Yao
- Chin-Yew Lin
- Qingyu Zhou
- Jian Yin
Meeting of the Association for Computational Linguistics
Published by Association for Computational Linguistics
To solve math word problems, previous statistical approaches attempt to learn a direct mapping from a problem description to its corresponding equation system. However, such mappings do not capture certain higher-order operations that cannot be explicitly represented in equations but are required to solve the problem. This gap between natural language and equations makes it difficult for a learned model to generalize from limited data. In this work we present an intermediate meaning representation scheme that aims to reduce this gap. We use a sequence-to-sequence model with a novel attention regularization term to generate the intermediate forms, then execute them to obtain the final answers. Since the intermediate forms are latent, we propose an iterative labeling framework that learns by leveraging supervision signals from both equations and answers. Our experiments show that using intermediate forms outperforms directly predicting equations.
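The abstract describes two executable ideas: intermediate forms are programs that can be evaluated to an answer, and iterative labeling keeps a generated candidate form only if it executes to the gold answer. The following minimal Python sketch illustrates that loop under simplified assumptions; the postfix form language, the `relabel` helper, and the `model.beam_decode` interface are placeholders for illustration, not the paper's actual representation scheme or API.

```python
# Sketch of execution-based relabeling, assuming intermediate forms are
# postfix token sequences. This is an illustrative simplification of the
# iterative labeling idea, not the paper's released implementation.

import operator

OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def execute_form(form):
    """Evaluate an intermediate form written as postfix tokens."""
    stack = []
    for tok in form:
        if tok in OPS:
            b, a = stack.pop(), stack.pop()
            stack.append(OPS[tok](a, b))
        else:
            stack.append(float(tok))
    return stack[0]

def relabel(problems, labels, answers, model, beam_size=10):
    """One round of iterative labeling: replace a problem's latent form
    with the first beam candidate that executes to the gold answer."""
    for i, problem in enumerate(problems):
        # model.beam_decode is an assumed interface returning candidate
        # forms in decreasing model score.
        for candidate in model.beam_decode(problem, beam_size):
            try:
                if abs(execute_form(candidate) - answers[i]) < 1e-6:
                    labels[i] = candidate
                    break
            except (IndexError, ValueError, ZeroDivisionError):
                continue  # skip malformed or non-executable candidates
    return labels
```

In the framework described here, the initial labels would roughly be bootstrapped from the annotated equation systems, and the sequence-to-sequence generator (with its attention regularization term) retrained on the updated labels after each round, so that supervision from both equations and answers is used.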