The MineRL Competition on Sample Efficient Reinforcement Learning using Human Priors

  • William H. Guss,
  • Cayden Codel,
  • Katja Hofmann,
  • Brandon Houghton,
  • Noboru Kuno,
  • Stephanie Milani,
  • Sharada Mohanty,
  • Diego Perez Liebana,
  • Ruslan Salakhutdinov,
  • Nicholay Topin,
  • Manuela Veloso,
  • Philip Wang

Thirty-third Conference on Neural Information Processing Systems (NeurIPS) Competition track


Though deep reinforcement learning (RL) has led to breakthroughs in many difficult domains, these successes have required an ever-increasing number of samples. Because state-of-the-art RL systems demand exponentially more samples, their development is restricted to a continually shrinking segment of the AI community. Likewise, many of these systems cannot be applied to real-world problems, where environment samples are expensive. Overcoming these limitations requires new, sample-efficient methods. To facilitate research in this direction, we introduce the MineRL Competition on Sample Efficient Reinforcement Learning using Human Priors.
The primary goal of the competition is to foster the development of algorithms that can efficiently leverage human demonstrations to drastically reduce the number of samples needed to solve complex, hierarchical, and sparse-reward environments. To that end, we introduce: (1) the Minecraft ObtainDiamond task, a sequential decision-making environment requiring long-term planning, hierarchical control, and efficient exploration; and (2) the MineRL-v0 dataset, a large-scale collection of over 60 million state-action pairs of human demonstrations that can be resimulated into embodied trajectories with arbitrary modifications to game state and visuals.
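For concreteness, the sketch below shows how a participant might interact with the task and the demonstration dataset. It is a minimal sketch assuming the publicly released minerl Python package and its OpenAI Gym-style interface; the environment ID, the data_dir argument, and the batch_iter call come from that package rather than from this paper, and exact signatures may vary between package versions.

```python
# Minimal sketch, assuming the publicly released `minerl` package
# (pip install minerl) and its Gym-style API; exact environment IDs
# and data-loader signatures may differ across package versions.
import gym
import minerl

# (1) The ObtainDiamond task is exposed as a standard Gym environment.
env = gym.make('MineRLObtainDiamond-v0')
obs = env.reset()
for _ in range(100):                    # placeholder random rollout
    action = env.action_space.sample()  # a real agent would act here
    obs, reward, done, _ = env.step(action)
    if done:
        break
env.close()

# (2) The MineRL-v0 human demonstrations stream as
# (state, action, reward, next_state, done) tuples, which could feed,
# e.g., a behavioral-cloning or pretraining update.
data = minerl.data.make('MineRLObtainDiamond-v0', data_dir='data')
for state, action, reward, next_state, done in data.batch_iter(
        batch_size=32, seq_len=64, num_epochs=1):
    pass  # imitation-learning / pretraining step goes here
```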