
On the Convergence of Nesterov’s Accelerated Gradient Method in Stochastic Settings

August 14, 2020

Abstract

We study Nesterov’s accelerated gradient method with constant step-size and momentum parameters in the stochastic approximation setting (unbiased gradients with bounded variance) and the finite-sum setting (where randomness is due to sampling mini-batches). To build better insight into the behavior of Nesterov’s method in stochastic settings, we focus throughout on objectives that are smooth, strongly convex, and twice continuously differentiable. In the stochastic approximation setting, Nesterov’s method converges to a neighborhood of the optimal point at the same accelerated rate as in the deterministic setting. Perhaps surprisingly, in the finite-sum setting, we prove that Nesterov’s method may diverge with the usual choice of step-size and momentum, unless additional conditions on the problem related to conditioning and data coherence are satisfied. Our results shed light on why Nesterov’s method may fail to converge or achieve acceleration in the finite-sum setting.
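The iteration studied in the abstract — Nesterov's accelerated gradient method with constant step-size and momentum, applied to stochastic rather than exact gradients — can be sketched as follows. This is a minimal illustration, not the paper's experimental setup: the quadratic objective, the noise level, and the standard parameter choices α = 1/L and β = (√κ − 1)/(√κ + 1) are assumptions made here for the example.

```python
import numpy as np

def nesterov_stochastic(grad, x0, step_size, momentum, num_iters, rng):
    """Nesterov's accelerated gradient with constant step-size and momentum.

    `grad(y, rng)` returns a stochastic (unbiased) gradient at the
    extrapolated point y, as in the stochastic approximation setting.
    """
    x_prev = x0.copy()
    y = x0.copy()
    for _ in range(num_iters):
        x = y - step_size * grad(y, rng)     # gradient step at the extrapolated point
        y = x + momentum * (x - x_prev)      # momentum extrapolation
        x_prev = x
    return x_prev

# Illustrative objective: f(x) = 0.5 * x^T diag(d) x, which is smooth and
# strongly convex with mu = min(d) = 1 and L = max(d) = 10.
rng = np.random.default_rng(0)
d = np.array([1.0, 10.0])
sigma = 0.01                                 # noise scale (bounded variance)

def grad(y, rng):
    # Unbiased gradient of f plus additive noise.
    return d * y + sigma * rng.standard_normal(y.shape)

kappa = 10.0                                 # condition number L / mu
alpha = 1.0 / 10.0                           # step-size 1 / L
beta = (np.sqrt(kappa) - 1.0) / (np.sqrt(kappa) + 1.0)

x = nesterov_stochastic(grad, np.array([5.0, 5.0]), alpha, beta, 300, rng)
```

With exact gradients (sigma = 0) the iterates would converge to the minimizer at the accelerated rate; with noise they settle into a neighborhood of it, consistent with the stochastic-approximation result summarized above.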


Authors

Mido Assran

Michael Rabbat

Publisher

International Conference on Machine Learning (ICML)

Research Topics

Machine Learning
