RESEARCH

SlowMo: Improving Communication-Efficient Distributed SGD with Slow Momentum

February 25, 2020

Abstract

Distributed optimization is essential for training large models on large datasets. Multiple approaches have been proposed to reduce the communication overhead in distributed training, such as synchronizing only after performing multiple local SGD steps, and decentralized methods (e.g., using gossip algorithms) to decouple communications among workers. Although these methods run faster than AllReduce-based methods, which use blocking communication before every update, the resulting models may be less accurate after the same number of updates. Inspired by the BMUF method of Chen & Huo (2016), we propose a slow momentum (SlowMo) framework, where workers periodically synchronize and perform a momentum update, after multiple iterations of a base optimization algorithm. Experiments on image classification and machine translation tasks demonstrate that SlowMo consistently yields improvements in optimization and generalization performance relative to the base optimizer, even when the additional overhead is amortized over many updates so that the SlowMo runtime is on par with that of the base optimizer. We provide theoretical convergence guarantees showing that SlowMo converges to a stationary point of smooth non-convex losses. Since BMUF can be expressed through the SlowMo framework, our results also correspond to the first theoretical convergence guarantees for BMUF.
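To make the outer loop concrete, below is a minimal, single-process sketch of a SlowMo-style update: each simulated worker runs several base-optimizer steps from the shared slow weights, the resulting iterates are averaged, and a slow momentum buffer is updated from the averaged pseudo-gradient before the slow weights are stepped. The toy quadratic objective, the plain-SGD base optimizer, and the hyperparameter names (`tau`, `alpha`, `beta`) are illustrative assumptions, not the paper's experimental setup; real deployments would use an AllReduce across workers instead of an in-process average.

```python
# Illustrative single-process sketch of a SlowMo-style outer loop.
# Assumptions (not from the paper's experiments): a toy quadratic objective,
# plain noisy SGD as the base optimizer, and hyperparameter names tau/alpha/beta.
import numpy as np

def local_sgd_steps(x, grad_fn, lr, tau, rng):
    """Base optimizer: tau SGD steps with noisy gradients, starting from the slow weights."""
    y = x.copy()
    for _ in range(tau):
        y = y - lr * (grad_fn(y) + 0.1 * rng.standard_normal(y.shape))
    return y

def slowmo(x0, grad_fn, m=4, tau=10, lr=0.05, alpha=1.0, beta=0.8, outer_steps=50, seed=0):
    rng = np.random.default_rng(seed)
    x = x0.astype(float).copy()   # slow weights, shared by all workers
    u = np.zeros_like(x)          # slow momentum buffer
    for _ in range(outer_steps):
        # Each worker runs tau base-optimizer steps from the current slow weights.
        workers = [local_sgd_steps(x, grad_fn, lr, tau, rng) for _ in range(m)]
        y_avg = np.mean(workers, axis=0)   # exact average (AllReduce in a real distributed run)
        u = beta * u + (x - y_avg) / lr    # momentum on the averaged pseudo-gradient
        x = x - alpha * lr * u             # slow (outer) update
    return x

# Toy usage: minimize f(x) = 0.5 * ||x||^2, whose gradient is x.
x_final = slowmo(np.full(5, 10.0), grad_fn=lambda x: x)
print(np.linalg.norm(x_final))  # should shrink toward the noise floor near 0
```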

AUTHORS

Written by

Mike Rabbat

Nicolas Ballas

Vinayak Tantia

Jianyu Wang

Publisher

ICLR

Related Publications

December 15, 2021

RESEARCH

Sample-and-threshold differential privacy: Histograms and applications

Akash Bharadwaj, Graham Cormode

January 09, 2021

RESEARCH

Improved Sample Complexity for Incremental Autonomous Exploration in MDPs

Jean Tarbouriech, Alessandro Lazaric, Matteo Pirotta, Michal Valko

October 19, 2020

RESEARCH

SPEECH & AUDIO

Unsupervised Translation of Programming Languages

Baptiste Rozière, Marie-Anne Lachaux, Lowik Chanussot, Guillaume Lample

February 25, 2020

RESEARCH

Lookahead converges to stationary points of smooth non-convex functions

Mike Rabbat, Jianyu Wang, Nicolas Ballas, Vinayak Tantia
