Time-Constrained Learning

Pattern Recognition, Vol. 142, Art. 109672

Publication

Consider a scenario in which we have a huge labeled dataset and only a limited time budget to train a given learner. Since we may not be able to use the whole dataset, how should we proceed?

We propose TCT, an algorithm for this task whose design relies on principles from Machine Teaching. We present an experimental study involving 5 different learners and 20 datasets, where we show that TCT consistently outperforms alternative teaching/training methods, namely: (1) training over batches of random samples until the time limit is reached; (2) the state-of-the-art Machine Teaching algorithm for black-box learners proposed in [Dasgupta et al., ICML 19]; and (3) Stochastic Gradient Descent (when applicable). While our work is primarily practical, we also show that a stripped-down version of TCT has provable guarantees.
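
To make the time-constrained setting concrete, below is a minimal sketch of baseline (1), training on random batches until the time budget expires. It assumes a scikit-learn-style incremental learner exposing `partial_fit`; the function name, batch size, and budget handling are illustrative assumptions and do not reflect the paper's TCT algorithm.

```python
import time
import numpy as np

def train_with_time_budget(learner, X, y, budget_s, batch_size=1024, seed=None):
    """Fit `learner` on random batches of (X, y) until `budget_s` seconds elapse.

    Assumes a scikit-learn-style estimator with `partial_fit`
    (e.g. SGDClassifier). Illustrative sketch, not the paper's TCT.
    """
    rng = np.random.default_rng(seed)
    classes = np.unique(y)  # partial_fit needs the full label set up front
    deadline = time.monotonic() + budget_s
    n = len(X)
    while time.monotonic() < deadline:
        idx = rng.choice(n, size=min(batch_size, n), replace=False)
        learner.partial_fit(X[idx], y[idx], classes=classes)
    return learner

# Hypothetical usage with an incremental linear classifier:
# from sklearn.linear_model import SGDClassifier
# clf = train_with_time_budget(SGDClassifier(), X_train, y_train, budget_s=60)
```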