Learning Performance of Prediction Markets with Kelly Bettors

https://arxiv.org/abs/1201.6655

In evaluating prediction markets (and other crowd-prediction mechanisms), investigators have repeatedly observed a so-called "wisdom of crowds" effect, which roughly says that the average of participants performs much better than the average participant. The market price, an average or at least an aggregate of traders' beliefs, offers a better estimate than almost any individual trader's opinion. In this paper, we ask a stronger question: how does the market price compare to the best trader's belief, not just the average trader's? We measure the market's worst-case log regret, a notion common in machine learning theory. To arrive at a meaningful answer, we need to assume something about how traders behave. We suppose that every trader optimizes according to the Kelly criterion, a strategy that provably maximizes the compound growth of wealth over an (infinite) sequence of market interactions. We show several consequences. First, the market prediction is a wealth-weighted average of the individual participants' beliefs. Second, the market learns at the optimal rate: the market price reacts exactly as if updating according to Bayes' Law, and the market prediction has low worst-case log regret to the best individual participant. We simulate a sequence of markets in which an underlying true probability exists, showing that the market converges to the true objective frequency as if updating a Beta distribution, as the theory predicts. If agents adopt a fractional Kelly criterion, a common practical variant, we show that agents behave like full-Kelly agents with beliefs weighted between their own and the market's, and that the market price converges to a time-discounted frequency. Our analysis provides a new justification for fractional Kelly betting, a strategy widely used in practice for ad hoc reasons. Finally, we propose a method for an agent to learn her own optimal Kelly fraction.
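
To make the stated mechanics concrete, the following is a minimal simulation sketch of the model as the abstract describes it; it is not the paper's own code, and the function name simulate, the belief grid, and the parameter values are illustrative assumptions. In each round the market price is the wealth-weighted average of the traders' beliefs, Kelly bets settle so that each trader's wealth updates exactly like a Bayesian posterior weight on her belief, and a Kelly fraction below one mixes each trader's belief with the current market price.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate(beliefs, true_prob, rounds, fraction=1.0):
        # beliefs: each trader's fixed probability estimate (illustrative values)
        # true_prob: Bernoulli parameter generating the outcomes
        # fraction: Kelly fraction f; f = 1.0 is full Kelly
        wealth = np.ones_like(beliefs)          # equal starting wealth
        prices = []
        for _ in range(rounds):
            # Market price: wealth-weighted average of the traders' beliefs.
            price = wealth @ beliefs / wealth.sum()
            prices.append(price)
            # A fractional-Kelly agent acts like a full-Kelly agent whose
            # belief is mixed with the market price.
            effective = fraction * beliefs + (1.0 - fraction) * price
            # Kelly bets settle: wealth moves like a Bayesian posterior
            # weight on each trader's "hypothesis".
            if rng.random() < true_prob:        # outcome = 1
                wealth = wealth * effective / price
            else:                               # outcome = 0
                wealth = wealth * (1.0 - effective) / (1.0 - price)
        return np.array(prices)

    # Traders whose beliefs span a grid; the market price should approach
    # the true frequency, as if updating a Beta distribution.
    beliefs = np.linspace(0.05, 0.95, 19)
    prices = simulate(beliefs, true_prob=0.7, rounds=2000)
    print("final market price:", round(prices[-1], 3))

Total wealth is conserved each round; with fraction < 1 the effective beliefs sit closer to the price, so wealth reallocates more slowly, which is consistent with the time-discounted convergence described above for fractional Kelly.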