Shiftry: RNN Inference in 2KB of RAM

OOPSLA (Organized by ACM)

Traditionally, IoT devices send collected sensor data to an intelligent cloud where machine learning (ML)
inference happens. However, this is rapidly changing, and there is a recent trend to run ML on the edge
IoT devices themselves. An intelligent edge is attractive because it saves a network round trip (efficiency) and
keeps user data at the source (privacy). However, IoT devices are much more resource-constrained than
the cloud, which makes running ML on them challenging. Consider, for example, the Arduino Uno, a commonly
used board that has just 2KB of RAM and 32KB of read-only Flash memory. Although recent breakthroughs in ML
have produced novel recurrent neural network (RNN) models that provide good accuracy with KB-sized models,
deploying them on tiny devices under such hard memory constraints has remained elusive.

We present Shiftry, an automatic compiler from high-level floating-point ML models to fixed-point
C programs that use 8-bit and 16-bit integers, which have significantly lower memory requirements. For this
conversion, Shiftry uses a data-driven float-to-fixed procedure and a RAM management mechanism. These
techniques enable us to provide the first empirical evaluation of RNNs running on tiny edge devices. On simpler
ML models that prior work could handle, Shiftry-generated code has lower latency and higher accuracy.