Abstract State Transition Graphs for Model-Based Reinforcement Learning

2018 Brazilian Conference on Intelligent Systems

Published by IEEE

Skill acquisition methods for Reinforcement Learning (RL) focus on solving problems by breaking them into smaller sub-problems, allowing the learning agent to reuse the resulting skills in other, similar problems. Many of these skill acquisition methods rely on a State Transition Graph (STG). However, STGs are only practical for simple RL problems, since for complex problems the resulting STG becomes too large to handle in practice. In this paper, we propose a method for creating Abstract State Transition Graphs (ASTGs) that fuse structurally similar states into a single abstract state. We show that an ASTG is capable of: (i) efficiently identifying similar states; (ii) greatly reducing the number of states of an STG; and (iii) detecting temporal features, thus enabling the differentiation of states based on their predecessors. This allows the ASTG to be (i) more accurate, since it creates abstract states by merging states that are both structurally similar and preceded by similar states, and (ii) manageable with respect to its size.
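
The abstract does not spell out the merging procedure, so the sketch below is not the authors' algorithm: it is a minimal Python illustration of the general idea, merging STG states by partition refinement over both successor signatures (structural similarity) and predecessor signatures (the temporal feature mentioned above). All identifiers (`build_stg`, `abstract_stg`) and the toy transition log are hypothetical.

```python
from collections import defaultdict

# Hypothetical transition log (state, action, next_state): two structurally
# identical corridors (a*, b*) that should collapse into one abstract
# corridor, plus a state x1 that shares the corridors' outgoing behaviour
# but is reached by a different action, so its "temporal" (predecessor)
# signature keeps it separate.
TRANSITIONS = [
    ("a0", "right", "a1"), ("a1", "right", "a2"),
    ("b0", "right", "b1"), ("b1", "right", "b2"),
    ("x0", "down", "x1"), ("x1", "right", "x2"),
]

def build_stg(transitions):
    """Build a concrete STG as successor and predecessor adjacency maps."""
    succ, pred = defaultdict(set), defaultdict(set)
    for s, a, t in transitions:
        succ[s].add((a, t))
        pred[t].add((a, s))
    return succ, pred

def abstract_stg(succ, pred):
    """Merge states by partition refinement: two states share an abstract
    state iff, up to the current partition, they reach the same blocks via
    the same actions (structural similarity) and are reached from the same
    blocks via the same actions (temporal feature)."""
    states = set(succ) | set(pred)
    block = {s: 0 for s in states}  # start with the trivial partition
    while True:
        # Signature of a state under the current partition; including
        # block[s] itself guarantees each pass only refines the partition,
        # so the loop terminates in at most |states| iterations.
        sig = {s: (block[s],
                   frozenset((a, block[t]) for a, t in succ[s]),
                   frozenset((a, block[p]) for a, p in pred[s]))
               for s in states}
        ids, new_block = {}, {}
        for s in sorted(states):
            new_block[s] = ids.setdefault(sig[s], len(ids))
        if new_block == block:
            return block
        block = new_block

if __name__ == "__main__":
    succ, pred = build_stg(TRANSITIONS)
    block = abstract_stg(succ, pred)
    groups = defaultdict(list)
    for s, b in block.items():
        groups[b].append(s)
    for b in sorted(groups):
        print(f"abstract state {b}: {sorted(groups[b])}")
```

Running the sketch merges the two corridors (a0 with b0, a1 with b1, a2 with b2) into shared abstract states, while x1, whose forward behaviour matches a1 and b1 but whose predecessor differs, remains a distinct abstract state.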