RaCT: Toward Amortized Ranking-Critical Training For Collaborative Filtering

Eighth International Conference on Learning Representations (ICLR), 2020


We investigate new methods for training collaborative filtering models based on actor-critic reinforcement learning, to more directly maximize ranking-based objective functions. Specifically, we train a critic network to approximate ranking-based metrics, and then update the actor network to directly optimize against the learned metrics. In contrast to traditional learning-to-rank methods that require re-running the optimization procedure for new lists, our critic-based method amortizes the scoring process with a neural network, and can directly provide the (approximate) ranking scores for new lists.
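As a rough sketch of this idea, consider the loop below, assuming a PyTorch setup: the critic regresses from a few differentiable per-user features to the true (non-differentiable) NDCG@k, and the actor is then updated by ascending the critic's estimate. The linear actor, the two-feature critic input, and all dimensions are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

def ndcg_at_k(scores, relevance, k=10):
    """True (non-differentiable) NDCG@k, used as the critic's regression target."""
    k = min(k, scores.size(1))
    topk = scores.topk(k, dim=1).indices
    discounts = 1.0 / torch.log2(torch.arange(2, k + 2, dtype=scores.dtype))
    dcg = (relevance.gather(1, topk) * discounts).sum(dim=1)
    ideal = relevance.sort(dim=1, descending=True).values[:, :k]
    idcg = (ideal * discounts).sum(dim=1).clamp(min=1e-8)
    return dcg / idcg

# Hypothetical actor: maps user features to a score for every item.
actor = nn.Linear(50, 200)
# Critic: maps per-user summary features to an NDCG estimate in [0, 1].
critic = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

users = torch.randn(64, 50)                       # toy user features
relevance = (torch.rand(64, 200) < 0.05).float()  # toy held-out feedback

for step in range(100):
    scores = actor(users)
    # Differentiable per-user features for the critic: here, the multinomial
    # log-likelihood of the held-out items plus the interaction count
    # (an assumed feature set; the paper engineers its own).
    log_lik = (F.log_softmax(scores, dim=1) * relevance).sum(dim=1, keepdim=True)
    feats = torch.cat([log_lik, relevance.sum(dim=1, keepdim=True)], dim=1)

    # Critic step: regress toward the true, non-differentiable metric.
    target = ndcg_at_k(scores.detach(), relevance)
    critic_loss = F.mse_loss(critic(feats.detach()).squeeze(-1), target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor step: ascend the critic's differentiable metric estimate.
    actor_loss = -critic(feats).squeeze(-1).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
```

Because the critic is an ordinary feed-forward network, scoring a new list is a single forward pass, which is what amortizes the otherwise expensive ranking computation.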

We demonstrate that the actor-critic approach significantly improves the performance of a variety of prediction models, achieving results better than or comparable to strong baselines on three large-scale datasets.

Publication Downloads

RaCT

April 17, 2020

This repository implements Ranking-Critical Training (RaCT) for collaborative filtering, accepted at the International Conference on Learning Representations (ICLR), 2020. By using an actor-critic architecture to fine-tune a differentiable collaborative filtering model, we can improve the performance of a variety of MLE-based recommender models, such as variational auto-encoders. A sketch of such an actor follows below.
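For concreteness, here is a minimal sketch of the kind of MLE-based actor that would be fine-tuned this way, assuming a Mult-VAE-style encoder/decoder; all dimensions and the beta weight are illustrative, not the repository's actual settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultVAE(nn.Module):
    """Bare-bones multinomial VAE over a user's implicit-feedback vector."""
    def __init__(self, n_items=200, latent=32):
        super().__init__()
        self.enc = nn.Linear(n_items, 2 * latent)  # outputs mean and log-variance
        self.dec = nn.Linear(latent, n_items)

    def forward(self, x):
        mu, logvar = self.enc(F.normalize(x, dim=1)).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(z), mu, logvar

def mle_loss(logits, x, mu, logvar, beta=0.2):
    """Multinomial negative log-likelihood plus a beta-weighted KL term."""
    nll = -(F.log_softmax(logits, dim=1) * x).sum(dim=1).mean()
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
    return nll + beta * kl

# Toy usage: pre-train with the MLE objective before RaCT fine-tuning.
model = MultVAE()
x = (torch.rand(8, 200) < 0.05).float()
logits, mu, logvar = model(x)
loss = mle_loss(logits, x, mu, logvar)
```

The multinomial log-likelihood term here is also the kind of differentiable quantity that can be fed to the critic as a feature during RaCT fine-tuning.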