Speech Utterance Classification Model Training without Manual Transcriptions

  • Ye-Yi Wang,
  • John Lee,
  • Alex Acero

IEEE International Conference on Acoustics, Speech and Signal Processing

Published by Institute of Electrical and Electronics Engineers, Inc.

Speech utterance classification has been widely applied to a variety of spoken language understanding tasks, including call routing, dialog systems, and command and control. Most speech utterance classification systems adopt a data-driven statistical learning approach, which requires manually transcribed and annotated training data. In this paper we introduce a novel classification model training approach based on unsupervised language model adaptation. It requires only the wave files of the training speech utterances and their corresponding classification destinations for model training; no manual transcription of the utterances is necessary. Experimental results show that this approach, which is much cheaper to implement, achieves classification accuracy at the same level as a model trained with manual transcriptions.
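To make the classification task concrete, the following is a minimal sketch of one common formulation of utterance classification: train a simple per-destination unigram language model and route an utterance to the destination whose model scores it highest. This is an illustrative toy, not the paper's method; in particular, the training texts here are hypothetical strings standing in for recognizer output, whereas the paper's contribution is obtaining such models without any manual transcription.

```python
import math
from collections import Counter, defaultdict

# Toy training data: (recognized text, routing destination).
# These example utterances are hypothetical; in the paper's setting the
# text would come from an ASR decoder, not from manual transcription.
TRAIN = [
    ("i want to check my account balance", "billing"),
    ("how much do i owe on my bill", "billing"),
    ("my internet connection is down", "tech_support"),
    ("the service keeps disconnecting", "tech_support"),
]

def train_class_lms(data):
    """Build an add-one-smoothed unigram language model per destination."""
    counts = defaultdict(Counter)
    vocab = set()
    for text, dest in data:
        words = text.split()
        counts[dest].update(words)
        vocab.update(words)
    v = len(vocab) + 1  # +1 reserves mass for unseen words
    return {dest: (c, sum(c.values()), v) for dest, c in counts.items()}

def classify(lms, text):
    """Route to the destination whose LM gives the highest log-likelihood."""
    best, best_score = None, float("-inf")
    for dest, (c, total, v) in lms.items():
        score = sum(math.log((c[w] + 1) / (total + v)) for w in text.split())
        if score > best_score:
            best, best_score = dest, score
    return best

lms = train_class_lms(TRAIN)
print(classify(lms, "my connection is down"))   # -> tech_support
print(classify(lms, "how much do i owe"))       # -> billing
```

Real systems of this kind typically use richer n-gram or discriminative classifiers over ASR lattices rather than unigram counts over one-best strings, but the decision rule (argmax over per-destination model scores) is the same.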