Balancing efficiency and fairness in heterogeneous GPU clusters for deep learning

Fifteenth European Conference on Computer Systems (EuroSys '20) | Published by ACM


We present Gandiva_fair, a distributed, fair-share scheduler that balances the conflicting goals of efficiency and fairness in GPU clusters for deep learning training (DLT). Gandiva_fair provides performance isolation between users, enabling multiple users to share a single cluster and thus maximizing cluster efficiency. Gandiva_fair is the first scheduler that allocates cluster-wide GPU time fairly among active users.
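The abstract does not describe the fair-share mechanism itself; a classic way to split a time-multiplexed resource in proportion to user entitlements is stride scheduling. The sketch below is a minimal, illustrative allocator for GPU lease quanta under that assumption; the user names, ticket values, and `StrideAllocator` class are hypothetical and this is not Gandiva_fair's actual gang-aware implementation.

```python
import heapq

class StrideAllocator:
    """Illustrative stride scheduler: each user's share of GPU time is
    proportional to their tickets; the user with the smallest "pass"
    value is owed the most time and receives the next lease quantum."""

    BIG = 1 << 20  # stride numerator (arbitrary large constant)

    def __init__(self, users):
        # users: {user_id: tickets}; tickets encode each user's share.
        self.heap = []
        for uid, tickets in users.items():
            stride = self.BIG / tickets
            heapq.heappush(self.heap, (0.0, uid, stride))  # (pass, user, stride)

    def next_lease(self):
        """Grant one GPU quantum to the most-owed user, then advance
        that user's pass by their stride."""
        pass_val, uid, stride = heapq.heappop(self.heap)
        heapq.heappush(self.heap, (pass_val + stride, uid, stride))
        return uid

# Hypothetical usage: carol holds twice the tickets of alice or bob,
# so she receives twice as many GPU quanta in the long run.
alloc = StrideAllocator({"alice": 100, "bob": 100, "carol": 200})
print([alloc.next_lease() for _ in range(8)])
```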

Gandiva_fair achieves efficiency and fairness despite cluster heterogeneity. Data centers host a mix of GPU generations because of the rapid pace at which newer and faster GPUs are released. As the newer generations face higher demand from users, older GPU generations suffer poor utilization, reducing cluster efficiency. Gandiva_fair profiles the variable marginal utility that different jobs derive from newer GPUs, and transparently incentivizes users to use older GPUs through a novel resource trading mechanism that maximizes cluster efficiency without affecting any user's fairness guarantees. With a prototype implementation and evaluation in a heterogeneous 200-GPU cluster, we show that Gandiva_fair achieves both fairness and efficiency under realistic multi-user workloads.
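To make the trading idea concrete, the sketch below shows one plausible reading of a mutually beneficial trade: if user A's job speeds up less on a fast GPU than user B's job does, A can swap one fast GPU for several slow GPUs at any exchange rate strictly between the two profiled speedups, leaving both users with more aggregate throughput. The midpoint pricing, the linear-scaling assumption across slow GPUs, and the function name are illustrative assumptions, not the paper's actual pricing algorithm.

```python
def mutually_beneficial_trade(speedup_a, speedup_b):
    """Hypothetical trading rule (not Gandiva_fair's exact pricing).

    speedup_a, speedup_b: profiled throughput of each user's job on one
    fast GPU relative to one slow GPU (e.g., V100 vs. K80).
    Returns an exchange rate (slow GPUs per fast GPU) or None.
    Assumes jobs scale linearly across the slow GPUs received in trade.
    """
    if speedup_a >= speedup_b:
        return None  # no rate can benefit both users
    # Any rate in (speedup_a, speedup_b) works: A receives rate slow-GPU
    # units of throughput for a fast GPU worth only speedup_a to it,
    # while B pays rate < speedup_b slow GPUs for a fast GPU worth
    # speedup_b to it. The midpoint splits the surplus evenly.
    return (speedup_a + speedup_b) / 2

# Example: A's job gains only 1.25x from a fast GPU, B's gains 5x.
# Trading one fast GPU for ~3 slow GPUs makes both strictly better off.
print(mutually_beneficial_trade(1.25, 5.0))  # 3.125
```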