Online Classification Using a Voted RDA Method

Proceedings of the AAAI Conference on Artificial Intelligence


We propose a voted dual averaging method for online classification problems with explicit regularization. This method employs the update rule of the regularized dual averaging (RDA) method proposed by Xiao, but only on the subsequence of training examples where a classification error is made. We derive a bound on the number of mistakes made by this method on the training set, as well as its generalization error rate. We also introduce the concept of relative strength of regularization, and show how it affects the mistake bound and generalization performance. We apply the method with L1-regularization to a large-scale natural language processing task and obtain state-of-the-art classification performance with fairly sparse models.
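
To make the mistake-driven update concrete, below is a minimal sketch (not the authors' exact algorithm) of how RDA updates with L1-regularization, performed only on misclassified examples, can be combined with voted prediction. It assumes hinge-loss subgradients, the closed-form soft-thresholding step of L1-regularized RDA, and voted-perceptron-style voting; the class name and hyperparameters `lam` and `gamma` are illustrative.

```python
# Sketch of a voted, mistake-driven RDA classifier with L1 regularization.
# Assumptions: hinge-loss subgradients, soft-thresholding closed-form step,
# and survival-count voting as in the voted perceptron.
import numpy as np

class VotedRDAL1:
    def __init__(self, dim, lam=1e-4, gamma=1.0):
        self.lam = lam               # L1 regularization strength (illustrative)
        self.gamma = gamma           # scaling of the auxiliary strongly convex term
        self.g_sum = np.zeros(dim)   # running sum of subgradients (mistakes only)
        self.w = np.zeros(dim)       # current weight vector
        self.k = 0                   # number of mistakes, i.e., RDA updates so far
        self.voters = []             # (weight vector, survival count) pairs

    def _rda_update(self):
        # Closed-form L1-RDA step: soft-threshold the average subgradient,
        # then rescale; produces exact zeros and hence sparse weights.
        g_bar = self.g_sum / self.k
        shrunk = np.sign(g_bar) * np.maximum(np.abs(g_bar) - self.lam, 0.0)
        self.w = -(np.sqrt(self.k) / self.gamma) * shrunk

    def fit_one(self, x, y):
        # y in {-1, +1}; perform an RDA update only when a mistake is made.
        if y * np.dot(self.w, x) <= 0:
            self.k += 1
            self.g_sum += -y * x          # hinge-loss subgradient at a mistake
            self._rda_update()
            self.voters.append([self.w.copy(), 1])
        elif self.voters:
            self.voters[-1][1] += 1       # current weights survive another example

    def predict(self, x):
        # Voted prediction: each stored weight vector votes with weight equal
        # to the number of examples it survived.
        if not self.voters:
            return 1
        score = sum(c * np.sign(np.dot(w, x)) for w, c in self.voters)
        return 1 if score >= 0 else -1
```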