Noise-Resilient Training Method for Face Landmark Generation From Speech

IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 28, pp. 27-38

Visual cues such as lip movements, when available, play an important role in speech communication. They are especially helpful for people with hearing impairments or in noisy environments. When such cues are not available, a system that automatically generates talking faces in sync with input speech would enhance speech communication and enable many novel applications. In this work, we present a new system that generates 3D talking face landmarks from speech in an online fashion. We employ a neural network that accepts the raw waveform as input. The network contains convolutional layers with 1D kernels and outputs the active shape model (ASM) coefficients of the face landmarks. To promote smoother transitions between video frames, we present a variant of the model that has the same architecture but also accepts the previous frame's ASM coefficients as an additional input. To cope with background noise, we propose a new training method that incorporates speech enhancement ideas at the feature level. Objective evaluations of landmark prediction show that the proposed system yields statistically significantly smaller errors than two state-of-the-art baseline methods on both a single-speaker dataset and a multi-speaker dataset. Experiments on noisy speech input with five types of non-stationary unseen noise show statistically significant performance improvements attributable to the noise-resilient training method. Finally, subjective evaluations show that the generated talking faces match the input audio significantly more convincingly than those of the baselines, reaching a level of realism comparable to that of the ground-truth landmarks.
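For concreteness, the following is a minimal PyTorch sketch of the pipeline described above; it is an illustration under assumed hyperparameters, not the paper's implementation. The layer counts, channel widths, kernel sizes, 8000-sample window length, and the number of ASM coefficients (n_asm = 20) are placeholders, and the prev_frame_input flag corresponds to the autoregressive variant that also consumes the previous frame's ASM coefficients.

```python
import torch
import torch.nn as nn

class Speech2ASM(nn.Module):
    """Illustrative 1D-CNN mapping a raw-waveform window to ASM coefficients.

    Layer counts, channel widths, and the number of ASM coefficients
    (n_asm) are placeholder choices, not the paper's configuration.
    """

    def __init__(self, n_asm=20, prev_frame_input=False):
        super().__init__()
        self.prev_frame_input = prev_frame_input
        # Strided 1D convolutions progressively downsample the raw waveform.
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=16, stride=4), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis to one vector
        )
        # The autoregressive variant concatenates the previous frame's
        # ASM coefficients with the audio embedding before regression.
        in_dim = 128 + (n_asm if prev_frame_input else 0)
        self.head = nn.Linear(in_dim, n_asm)

    def forward(self, wav, prev_asm=None):
        # wav: (batch, samples) raw-waveform window for one video frame
        h = self.encoder(wav.unsqueeze(1)).squeeze(-1)  # (batch, 128)
        if self.prev_frame_input:
            h = torch.cat([h, prev_asm], dim=1)
        return self.head(h)  # predicted ASM coefficients, (batch, n_asm)

# Toy online-generation loop with random audio standing in for real input.
model = Speech2ASM(n_asm=20, prev_frame_input=True)
prev = torch.zeros(1, 20)  # neutral face for the first frame
for window in torch.randn(10, 1, 8000):  # ten 8000-sample windows
    prev = model(window, prev)  # each output seeds the next prediction
```

In this sketch, feeding the predicted coefficients back as input conditions each frame on the current face pose, which is what encourages smoother frame-to-frame transitions in the generated landmark sequence.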