Efficient Abstraction Selection in Reinforcement Learning – Extended Abstract

  • Harm van Seijen,
  • Shimon Whiteson,
  • L.J.H.M. Kester

SARA'13


This paper introduces a novel approach to abstraction selection in reinforcement learning problems modelled as factored Markov decision processes (MDPs), in which a state is described by a set of state components. In abstraction selection, an agent must choose an abstraction from a set of candidate abstractions, each built from a different combination of state components.
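To make the setting concrete, the following is a minimal Python sketch of how candidate abstractions could be represented: a factored state as a mapping from component names to values, and each candidate abstraction as a subset of those components onto which the state is projected. The component names and values are illustrative assumptions, not taken from the paper.

```python
from itertools import combinations

def candidate_abstractions(components):
    """Enumerate every non-empty combination of state components."""
    for size in range(1, len(components) + 1):
        for subset in combinations(components, size):
            yield frozenset(subset)

def project(state, abstraction):
    """Map a factored state onto the abstract state induced by an abstraction."""
    return tuple(sorted((c, state[c]) for c in abstraction))

# Hypothetical factored state with three components.
components = ["x_pos", "y_pos", "velocity"]
state = {"x_pos": 3, "y_pos": 7, "velocity": -1}

for abstraction in candidate_abstractions(components):
    print(sorted(abstraction), "->", project(state, abstraction))
```

Each candidate abstraction ignores the components outside its subset, so states that differ only in those ignored components collapse to the same abstract state; the selection problem is to pick the subset that best trades off compactness against the information needed to learn a good policy.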