Optimizing group-fair Plackett-Luce ranking models for relevance and ex-post fairness
- Sruthi Gorantla,
- Eshaan Bhansali,
- Amit Deshpande,
- Anand Louis
NeurIPS workshop OPT2023: Optimization for Machine Learning
In learning-to-rank (LTR), optimizing only for relevance (or, equivalently, the expected ranking utility) can cause representational harm to certain categories of items. We propose a novel objective that maximizes expected relevance only over those rankings that satisfy given representation constraints, thereby ensuring ex-post fairness. Building upon recent work on an efficient sampler for ex-post group-fair rankings, we propose a group-fair Plackett-Luce model and show that it can be efficiently optimized for our objective in the LTR framework. Experiments on three real-world datasets show that our algorithm guarantees fairness while usually achieving better relevance than the LTR baselines. Our algorithm also achieves better relevance than post-processing baselines that likewise ensure ex-post fairness. Further, when implicit bias is injected into the training data, our algorithm typically outperforms existing LTR baselines in relevance.
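
The abstract does not spell out the model, but the sketch below illustrates one simple way to sample a ranking from a Plackett-Luce distribution while respecting per-prefix group representation caps. The function name, the quota encoding, and the greedy prefix-by-prefix restriction are all illustrative assumptions for this note; they are not the paper's actual sampler or objective.

```python
import numpy as np

def sample_prefix_constrained_pl(scores, groups, max_per_prefix, rng=None):
    """Sample one ranking from a Plackett-Luce model with item weights exp(scores),
    restricted so that every top-(k+1) prefix contains at most
    max_per_prefix[g][k] items from group g.

    NOTE: this greedy, position-by-position restriction is a hypothetical
    simplification used only to illustrate the idea of ex-post representation
    constraints; it is not the efficient group-fair sampler referenced above.
    """
    rng = np.random.default_rng() if rng is None else rng
    scores = np.asarray(scores, dtype=float)
    n = len(scores)
    weights = np.exp(scores - scores.max())  # numerically stable PL weights
    remaining = list(range(n))
    counts = {g: 0 for g in set(groups)}    # items of each group placed so far
    ranking = []
    for k in range(n):
        # Only items whose group still has room in the top-(k+1) prefix are feasible.
        feasible = [i for i in remaining
                    if counts[groups[i]] + 1 <= max_per_prefix[groups[i]][k]]
        if not feasible:
            raise ValueError("Representation constraints are infeasible at position %d" % k)
        w = np.array([weights[i] for i in feasible])
        pick = feasible[rng.choice(len(feasible), p=w / w.sum())]
        ranking.append(pick)
        remaining.remove(pick)
        counts[groups[pick]] += 1
    return ranking
```

Under these assumptions, the training objective described in the abstract would then be optimized by estimating the expected ranking utility (e.g., a DCG-style sum of item relevances discounted by position) only over rankings produced by such a constrained sampler, rather than over all rankings of the unconstrained Plackett-Luce model.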