Measuring Sample Efficiency and Generalization in Reinforcement Learning Benchmarks: NeurIPS 2020 Procgen Benchmark

  • Sharada Mohanty,
  • Jyotish Poonganam,
  • Adrien Gaidon,
  • Blake Wulfe,
  • Dipam Chakraborty,
  • Gražvydas Šemetulskis,
  • João Schapke,
  • Jonas Kubilius,
  • Jurgis Pašukonis,
  • Linas Klimas,
  • Matthew Hausknecht,
  • Patrick MacAlpine,
  • Quang Nhat Tran,
  • Thomas Tumiel,
  • Xiaocheng Tang,
  • Xinwei Chen,
  • Christopher Hesse,
  • Jacob Hilton,
  • William Hebgen Guss,
  • Sahika Genc,
  • John Schulman,
  • Karl Cobbe

NeurIPS 2020 Competition Track

The NeurIPS 2020 Procgen Competition was designed as a centralized benchmark with clearly defined tasks for measuring sample efficiency and generalization in reinforcement learning. Generalization remains one of the most fundamental challenges in deep reinforcement learning, yet the community has few benchmarks with which to measure progress on it. We present the design of a centralized reinforcement learning benchmark that measures sample efficiency and generalization by performing end-to-end evaluation of the training and rollout phases of thousands of user-submitted code bases in a scalable way. The benchmark builds on the existing Procgen Benchmark by defining clear tasks and standardizing the end-to-end evaluation setup. The design aims to maximize the flexibility available to researchers who wish to design future iterations of such benchmarks, while imposing the practical constraints necessary for such a system to scale. This paper presents the competition setup, along with details and analysis of the top solutions identified in the 2020 iteration of the competition at NeurIPS.
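For context, the underlying Procgen environments expose a standard Gym interface in which generalization is probed by training on a finite set of procedurally generated levels and evaluating on the full level distribution. The sketch below illustrates that train/test level split; the game ("coinrun") and level counts are illustrative assumptions, not the competition's exact evaluation configuration.

```python
# Minimal sketch of Procgen's train/test level split for generalization.
# The game name and num_levels values are illustrative assumptions.
import gym

# Train on a finite, fixed set of procedurally generated levels.
train_env = gym.make(
    "procgen:procgen-coinrun-v0",
    num_levels=200,           # finite training set of levels
    start_level=0,
    distribution_mode="easy",
)

# Evaluate generalization on the unrestricted level distribution.
test_env = gym.make(
    "procgen:procgen-coinrun-v0",
    num_levels=0,             # 0 samples from all available levels
    start_level=0,
    distribution_mode="easy",
)

obs = train_env.reset()
obs, reward, done, info = train_env.step(train_env.action_space.sample())
```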