Scaling Distributed Machine Learning with In-Network Aggregation

  • Amedeo Sapio,
  • Marco Canini,
  • Chen-Yu Ho,
  • Jacob Nelson,
  • Panos Kalnis,
  • Changhoon Kim,
  • Arvind Krishnamurthy,
  • Masoud Moshref,
  • Dan R. K. Ports,
  • Peter Richtárik

NSDI 2021

Organized by USENIX


Training machine learning models in parallel is an increasingly important workload. We accelerate distributed parallel training by designing a communication primitive that uses a programmable switch dataplane to execute a key step of the training process. Our approach, SwitchML, reduces the volume of exchanged data by aggregating the model updates from multiple workers in the network. We co-design the switch processing with the end-host protocols and ML frameworks to provide an efficient solution that speeds up training by up to 5.5x for a number of real-world benchmark models.
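To make the aggregation primitive in the abstract concrete, here is a minimal software sketch of the idea: each worker sends a fixed-size, integer-quantized chunk of its model update, and an aggregation slot sums the contributions and releases the result once every worker has reported. This is an illustrative model only, not the SwitchML implementation; the names (AggregatingSwitch, NUM_WORKERS, SLOT_SIZE, run_step) and the slot/chunk sizes are assumptions made for this example.

```python
# Illustrative sketch of in-network aggregation (not the authors' code).
import numpy as np

NUM_WORKERS = 4   # workers contributing model updates (assumed for the example)
SLOT_SIZE = 64    # values aggregated per packet-sized chunk (assumed)


class AggregatingSwitch:
    """Models one switch aggregation slot that sums a chunk from every worker."""

    def __init__(self):
        self.accumulator = np.zeros(SLOT_SIZE, dtype=np.int64)
        self.seen = 0

    def receive(self, chunk):
        # Add this worker's contribution into the slot.
        self.accumulator += chunk
        self.seen += 1
        if self.seen == NUM_WORKERS:
            # All contributions arrived: release the sum and reset the slot.
            result = self.accumulator.copy()
            self.accumulator[:] = 0
            self.seen = 0
            return result
        return None


def run_step(worker_chunks):
    """Aggregate one chunk from all workers through the modeled switch slot."""
    switch = AggregatingSwitch()
    result = None
    for chunk in worker_chunks:
        out = switch.receive(chunk)
        if out is not None:
            result = out
    return result


if __name__ == "__main__":
    # Each worker holds an integer-quantized gradient chunk; switch dataplanes
    # operate on fixed-point values rather than floating point.
    rng = np.random.default_rng(0)
    grads = [rng.integers(-10, 10, SLOT_SIZE) for _ in range(NUM_WORKERS)]
    aggregated = run_step(grads)
    assert np.array_equal(aggregated, sum(grads))
    print("aggregated chunk:", aggregated[:8], "...")
```

The point of the sketch is the bandwidth argument from the abstract: because the chunks are summed inside the network, each worker receives one aggregated chunk instead of one chunk per peer, reducing the volume of exchanged data per training step.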