Asynchronous Gradient-Push

January 1, 2021


We consider a multi-agent framework for distributed optimization where each agent has access to a local smooth strongly convex function, and the collective goal is to achieve consensus on the parameters that minimize the sum of the agents’ local functions. We propose an algorithm wherein each agent operates asynchronously and independently of the other agents. When the local functions are strongly convex with Lipschitz-continuous gradients, we show that the iterates at each agent converge to a neighborhood of the global minimum, where the neighborhood size depends on the degree of asynchrony in the multi-agent network. When the agents work at the same rate, convergence to the global minimizer is achieved. Numerical experiments demonstrate that Asynchronous Gradient-Push can minimize the global objective faster than state-of-the-art synchronous first-order methods, is more robust to failing or stalling agents, and scales better with the network size.
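To illustrate the mechanism the abstract builds on, here is a minimal sketch of the *synchronous* gradient-push idea (push-sum averaging combined with local gradient steps) on a small directed ring. It is not the paper's asynchronous algorithm; the ring topology, the hypothetical quadratic local objectives f_i(x) = (x − a_i)², and the step-size schedule are all assumptions made for this toy example.

```python
import numpy as np

def gradient_push(targets, num_iters=2000, alpha0=0.05):
    """Synchronous gradient-push sketch: each agent i minimizes
    f_i(x) = (x - targets[i])**2; collectively they should agree on
    the minimizer of the sum, i.e. the mean of `targets`."""
    n = len(targets)
    # Directed ring with self-loops: agent i keeps half of its values
    # and pushes half to agent (i + 1) % n, so the mixing matrix is
    # column-stochastic (every out-degree is 2).
    x = np.zeros(n)   # push-sum numerators
    y = np.ones(n)    # push-sum weights, used to de-bias the numerators
    z = x / y         # each agent's current estimate of the minimizer
    for t in range(num_iters):
        # Push step: split values among the two out-neighbors.
        x_new = x / 2 + np.roll(x / 2, 1)
        y_new = y / 2 + np.roll(y / 2, 1)
        # De-biased estimates. (On this symmetric ring y stays at 1;
        # on a general digraph the y-correction is what removes the
        # bias introduced by unequal out-degrees.)
        z = x_new / y_new
        # Local gradient step on f_i(z) = (z - a_i)**2.
        grad = 2.0 * (z - targets)
        x = x_new - (alpha0 / np.sqrt(t + 1)) * grad  # diminishing step
        y = y_new
    return z

estimates = gradient_push(np.array([1.0, 2.0, 6.0]))
# All agents should roughly agree on a value near the mean, 3.0.
```

The asynchronous variant studied in the paper lets each agent perform these push and gradient steps at its own rate, using whatever (possibly stale) messages have arrived, which is what introduces the asynchrony-dependent neighborhood in the convergence result.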



Written by

Mahmoud Assran

Michael Rabbat


IEEE Transactions on Automatic Control
