Representation Learning for Event-based Visuomotor Policies

  • Sai Vemprala,
  • Sami Mian,
  • Ashish Kapoor

NeurIPS 2021

Event-based cameras are dynamic vision sensors that provide asynchronous measurements of changes in per-pixel brightness at microsecond resolution. This makes them significantly faster than conventional frame-based cameras and an appealing choice for high-speed navigation. While event cameras are an interesting sensor modality, their asynchronous data poses a challenge for common machine learning techniques. In this paper, we present an event variational autoencoder for unsupervised representation learning from asynchronous event camera data. We show that it is feasible to learn compact representations from spatiotemporal event data that encode the context of the scene. Furthermore, we show that such pretrained representations are beneficial for navigation, allowing them to be used in reinforcement learning in place of end-to-end, reward-driven perception. We validate this framework for learning visuomotor policies by applying it to an obstacle avoidance scenario in simulation. We show that representations learnt from event data enable training fast control policies that can adapt to different control capacities, and demonstrate a higher degree of robustness than end-to-end learning directly from event images.
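
As an illustrative aside, the sketch below shows one way a variational autoencoder over raw event tuples (x, y, timestamp, polarity) might be structured in PyTorch. It is a minimal, assumption-laden example rather than the architecture from the paper: the class name `EventVAE`, the fixed per-sample event count, the latent dimension, and the symmetric MLP decoder are placeholders chosen for brevity.

```python
# Minimal sketch of a VAE over batches of event tuples (x, y, t, polarity).
# Illustrative assumption only -- not the paper's architecture; names and
# dimensions are placeholders.
import torch
import torch.nn as nn


class EventVAE(nn.Module):
    def __init__(self, events_per_sample=512, latent_dim=32):
        super().__init__()
        self.events_per_sample = events_per_sample
        # Encoder: embed each event, pool over the set, then map the pooled
        # feature to the mean and log-variance of the latent distribution.
        self.event_embed = nn.Sequential(
            nn.Linear(4, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
        )
        self.to_mu = nn.Linear(128, latent_dim)
        self.to_logvar = nn.Linear(128, latent_dim)
        # Decoder: reconstruct the flattened event tuples from the latent code.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, 4 * events_per_sample),
        )

    def encode(self, events):
        # events: (batch, events_per_sample, 4), columns (x, y, t, polarity).
        h = self.event_embed(events).mean(dim=1)  # permutation-invariant pooling
        return self.to_mu(h), self.to_logvar(h)

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, events):
        mu, logvar = self.encode(events)
        z = self.reparameterize(mu, logvar)
        recon = self.decoder(z).view(-1, self.events_per_sample, 4)
        return recon, mu, logvar


def vae_loss(recon, events, mu, logvar, beta=1.0):
    # Reconstruction term plus KL divergence to the unit Gaussian prior.
    recon_err = ((recon - events) ** 2).sum(dim=(1, 2)).mean()
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
    return recon_err + beta * kl


if __name__ == "__main__":
    model = EventVAE()
    batch = torch.rand(8, 512, 4)  # synthetic, normalized event tuples
    recon, mu, logvar = model(batch)
    print(vae_loss(recon, batch, mu, logvar).item())
```

Under this kind of setup, a pretrained encoder could be frozen and its latent code passed to a reinforcement learning policy as the observation, in the spirit of the decoupled perception-and-control pipeline described above.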