Intelligent Selection of Language Model Training Data

  • Robert C. Moore,
  • Will Lewis

Proceedings of the ACL 2010 Conference Short Papers

Published by Association for Computational Linguistics


We address the problem of selecting non-domain-specific language model training data to build auxiliary language models for use in tasks such as machine translation. Our approach is based on comparing the cross-entropy, according to domain-specific and non-domain-specific language models, of each sentence of the text source used to produce the latter language model. We show that this produces better language models, trained on less data, than either random data selection or a previous method based on measuring perplexity according to a domain-specific language model.
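
As a rough illustration of the selection criterion described above (not the paper's implementation), the sketch below scores each sentence of a general corpus by the difference in per-word cross-entropy under an in-domain model and a general-corpus model, keeping sentences the in-domain model finds relatively unsurprising. It substitutes toy add-one-smoothed unigram models for the paper's n-gram language models, and the names `train_unigram_lm` and `moore_lewis_select`, along with the `threshold` parameter, are illustrative assumptions rather than anything from the paper.

```python
import math
from collections import Counter

def train_unigram_lm(sentences):
    """Add-one-smoothed unigram LM (a toy stand-in for the paper's
    n-gram models). Returns a log-probability function over tokens."""
    counts = Counter(tok for s in sentences for tok in s.split())
    total = sum(counts.values())
    vocab = len(counts) + 1  # +1 reserves mass for unseen tokens
    def log_prob(token):
        return math.log((counts[token] + 1) / (total + vocab))
    return log_prob

def cross_entropy(log_prob, sentence):
    """Per-word cross-entropy of a sentence under a language model."""
    tokens = sentence.split()
    return -sum(log_prob(t) for t in tokens) / max(len(tokens), 1)

def moore_lewis_select(general_corpus, in_domain_corpus, threshold=0.0):
    """Keep general-corpus sentences whose cross-entropy difference
    H_in(s) - H_gen(s) falls below the threshold, i.e. sentences that
    look more like the in-domain text than like the general text."""
    lm_in = train_unigram_lm(in_domain_corpus)
    lm_gen = train_unigram_lm(general_corpus)
    scored = [(cross_entropy(lm_in, s) - cross_entropy(lm_gen, s), s)
              for s in general_corpus]
    return [s for score, s in sorted(scored) if score < threshold]

if __name__ == "__main__":
    # Hypothetical toy corpora for illustration only.
    in_domain = ["the patient received a dose of aspirin",
                 "the doctor examined the patient"]
    general = ["the stock market fell sharply today",
               "the patient was given a new dose",
               "parliament passed the budget bill"]
    for s in moore_lewis_select(general, in_domain):
        print(s)
```

With `threshold=0.0`, only sentences assigned lower cross-entropy by the in-domain model than by the general model survive; in practice the threshold acts as the knob trading selected-data size against domain fit.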