Baleen: ML Admission & Prefetching for Flash Caches

  • Daniel Lin-Kit Wong,
  • Hao Wu,
  • Carson Molder,
  • Sathya Gunasekar,
  • Jimmy Lu,
  • Snehal Khandkar,
  • Abhinav Sharma,
  • Daniel S. Berger,
  • Nathan Beckmann,
  • Gregory R. Ganger

USENIX FAST '24


Flash caches are used to reduce peak backend load for throughput-constrained data center services, reducing the total number of backend servers required. Bulk storage systems are a large-scale example: backed by high-capacity but low-throughput hard disks, they use flash caches to provide a more cost-effective storage layer underlying everything from blobstores to data warehouses.

However, flash caches must address the limited write endurance of flash by limiting the long-term average flash write rate to avoid premature wearout. To do so, most flash caches use admission policies to filter cache insertions and maximize the workload-reduction value of each flash write.
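
To make the role of an admission policy concrete, here is a minimal sketch of how a flash cache might gate insertions on a miss. The `flash_cache`, `backend`, and `admit` names are illustrative placeholders, not APIs from Baleen's code; `admit` could be any heuristic or learned predictor thresholded on expected benefit.

```python
from typing import Callable

def handle_miss(key: str, size: int, flash_cache, backend,
                admit: Callable[[str, int], bool]) -> bytes:
    """On a flash-cache miss, fetch from the backend and insert into flash
    only if the admission policy accepts the item. Filtering insertions keeps
    the long-term flash write rate within the device's endurance budget."""
    data = backend.read(key, size)        # the miss costs a backend (disk) IO
    if admit(key, size):                  # admission policy gates the flash write
        flash_cache.insert(key, data)
    return data
```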

The Baleen flash cache uses coordinated ML admission and prefetching to reduce peak backend load. After learning painful lessons with our early ML policy attempts, we exploit a new cache residency model (which we call episodes) to guide model training. We focus on optimizing for an end-to-end system metric (Disk-head Time) that measures backend load more accurately than IO miss rate or byte miss rate.
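
As a rough illustration of a Disk-head Time (DT) style metric, each backend IO can be charged a fixed positioning overhead plus a size-proportional transfer time. The constants below are assumptions for the sketch, not figures from the paper.

```python
# Illustrative constants; real values depend on the hard-disk model.
SEEK_MS = 8.0              # assumed per-IO positioning overhead (ms)
BANDWIDTH_MB_PER_S = 150   # assumed sustained disk bandwidth (MB/s)

def disk_head_time_ms(io_bytes: int) -> float:
    """Approximate time (ms) one backend IO keeps a disk head busy."""
    transfer_ms = io_bytes / (BANDWIDTH_MB_PER_S * 1e6) * 1e3
    return SEEK_MS + transfer_ms

def total_dt_ms(missed_io_sizes: list[int]) -> float:
    """Total disk-head time incurred by IOs that miss in the flash cache."""
    return sum(disk_head_time_ms(size) for size in missed_io_sizes)
```

Unlike IO miss rate or byte miss rate alone, such a measure charges both the fixed seek cost of small IOs and the transfer cost of large ones, which is why it tracks backend disk load more faithfully.
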
Evaluation using Meta traces from seven storage clusters shows that Baleen reduces Peak Disk-head Time (and hence the number of backend hard disks required) by 12% over state-of-the-art policies for a fixed flash write rate constraint. Baleen-TCO, which chooses an optimal flash write rate, reduces our estimated total cost of ownership (TCO) by 17%. Code and traces are available via https://www.pdl.cmu.edu/CILES/.