End-to-end subtitle detection and recognition for videos in East Asian languages via CNN ensemble

  • Yan Xu,
  • Siyuan Shan,
  • Ziming Qiu,
  • Zhipeng Jia,
  • Zhengyang Shen,
  • Yipei Wang,
  • Mengfei Shi,
  • Eric Chang

Signal Processing: Image Communication, Vol. 60, pp. 131-143

In this paper, we propose an innovative end-to-end subtitle detection and recognition system for videos in East Asian languages. Our end-to-end system consists of multiple stages. Subtitles are first detected by a novel image operator that exploits the sequence information of consecutive video frames. An ensemble of Convolutional Neural Networks (CNNs) trained on synthetic data then detects and recognizes East Asian characters. Finally, a dynamic programming approach leveraging language models assembles the character-level recognition results into complete text lines. The proposed system achieves average end-to-end accuracies of 98.2% on 40 videos in Simplified Chinese and 98.3% on 40 videos in Traditional Chinese, significantly outperforming existing methods. The near-perfect accuracy of our system dramatically narrows the gap between human cognitive ability and state-of-the-art algorithms on this task.
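To make the pipeline concrete, below is a minimal sketch of the first stage's idea: subtitle pixels stay nearly constant across consecutive frames while the background moves, so a temporal-stability map highlights candidate subtitle regions. The per-pixel operator here (fraction of low-difference frame transitions) is an illustrative stand-in, not the paper's exact image operator.

```python
import numpy as np

def temporal_stability_map(frames, diff_threshold=10):
    """Highlight pixels that stay nearly constant across consecutive
    grayscale frames -- a rough proxy for overlaid subtitle regions.

    frames: sequence of (H, W) uint8 grayscale frames.
    Returns a float map in [0, 1]; higher = more temporally stable.
    """
    stack = np.stack([f.astype(np.int16) for f in frames])  # (T, H, W)
    diffs = np.abs(np.diff(stack, axis=0))                  # (T-1, H, W)
    # A pixel is "stable" across a transition if it barely changes.
    stable = (diffs < diff_threshold).astype(np.float32)
    # Fraction of frame transitions in which each pixel stayed stable.
    return stable.mean(axis=0)
```

Thresholding this map and intersecting it with a contrast or edge map would yield candidate subtitle rows for the CNN stage.

The final stage can likewise be sketched as Viterbi-style dynamic programming over a lattice of per-character CNN candidates, scored jointly with a language model. The bigram interface (`start_logp`, `bigram_logp`) and the candidate format below are hypothetical placeholders, assuming a first-order language model rather than the paper's exact formulation.

```python
def best_text_line(candidates, start_logp, bigram_logp):
    """Choose the most probable character sequence for one text line.

    candidates: list over character positions; each entry is a list of
        (char, cnn_log_prob) pairs from the recognition ensemble.
    start_logp(c): assumed LM log P(c) for the first position.
    bigram_logp(p, c): assumed LM log P(c | p).
    """
    # scores[c] = best log-probability of any path ending in char c.
    scores = {c: lp + start_logp(c) for c, lp in candidates[0]}
    back = [{c: None for c, _ in candidates[0]}]  # backpointers per position
    for pos in range(1, len(candidates)):
        new_scores, new_back = {}, {}
        for c, lp in candidates[pos]:
            # Best predecessor under path score + language-model score.
            prev, s = max(
                ((p, ps + bigram_logp(p, c)) for p, ps in scores.items()),
                key=lambda t: t[1],
            )
            new_scores[c] = s + lp
            new_back[c] = prev
        scores = new_scores
        back.append(new_back)
    # Trace the best path backwards from the best final character.
    ch = max(scores, key=scores.get)
    line = [ch]
    for pos in range(len(candidates) - 1, 0, -1):
        ch = back[pos][ch]
        line.append(ch)
    return "".join(reversed(line))
```

In this scheme the CNN ensemble supplies the per-position candidate scores and the language model supplies the transition scores, so the dynamic program trades off visual evidence against linguistic plausibility when composing each text line.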