Self-training with Weak Supervision

NAACL 2021

State-of-the-art deep neural networks require large-scale labeled training data that is often expensive to obtain or not available for many tasks. Weak supervision in the form of domain-specific rules has been shown to be useful in such settings to automatically generate weakly labeled training data. However, learning with weak rules is challenging due to their inherent heuristic and noisy nature. An additional challenge is rule coverage and overlap, where prior work on weak supervision only considers instances that are covered by weak rules, thus leaving valuable unlabeled data behind. In this work, we develop a weak supervision framework (ASTRA) that leverages all the available data for a given task. To this end, we leverage task-specific unlabeled data through self-training with a model (student) that considers contextualized representations and predicts pseudo-labels for instances that may not be covered by weak rules. We further develop a rule attention network (teacher) that learns how to aggregate student pseudo-labels with weak rule labels, conditioned on their fidelity and the underlying context of an instance. Finally, we construct a semi-supervised learning objective for end-to-end training with unlabeled data, domain-specific rules, and a small amount of labeled data. Extensive experiments on six benchmark datasets for text classification demonstrate the effectiveness of our approach with significant improvements over state-of-the-art baselines.
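The teacher's aggregation step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: each weak rule that fires on an instance contributes a one-hot vote, the student's soft pseudo-label acts as an always-available extra source, and a per-source attention score conditioned on the instance representation weights the votes into a soft teacher label. All names (`teacher_aggregate`, the weight matrix `W`) and the sigmoid scoring form are illustrative assumptions.

```python
import numpy as np

def teacher_aggregate(embedding, fired, student_probs, W, num_classes):
    """Combine weak-rule votes with the student pseudo-label (illustrative sketch).

    embedding:     instance representation, shape (d,)
    fired:         list of (rule_index, predicted_class) for rules covering the instance
    student_probs: student soft pseudo-label, shape (num_classes,)
    W:             attention weights, one row per rule plus a final row for the student,
                   shape (num_rules + 1, d)
    """
    # The student prediction is a source for every instance, even uncovered ones.
    sources, idx = [student_probs], [W.shape[0] - 1]
    for r, y in fired:
        sources.append(np.eye(num_classes)[y])  # one-hot vote from rule r
        idx.append(r)
    # Fidelity score per source, conditioned on the instance context.
    scores = 1.0 / (1.0 + np.exp(-(W[idx] @ embedding)))
    # Attention-weighted aggregation, normalized into a soft teacher label.
    agg = (scores[:, None] * np.asarray(sources)).sum(axis=0)
    return agg / agg.sum()

# Example: 3 hypothetical rules plus the student, 2 classes, d = 3.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))
emb = rng.standard_normal(3)
label = teacher_aggregate(emb, [(0, 1), (2, 0)], np.array([0.7, 0.3]), W, 2)
```

When no rule covers an instance, the aggregation reduces to the student's own pseudo-label, which is how the framework extends supervision beyond rule coverage.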

Publication Downloads

Self-training with Weak Supervision [Code]

April 27, 2021
