THEORY

A Simple Convergence Proof of Adam and Adagrad

November 30, 2022

Abstract

We provide a simple proof of convergence covering both the Adam and Adagrad adaptive optimization algorithms when applied to smooth (possibly non-convex) objective functions with bounded gradients. We show that, in expectation, the squared norm of the objective gradient averaged over the trajectory has an upper bound which is explicit in the constants of the problem, the parameters of the optimizer, the dimension d, and the total number of iterations N. This bound can be made arbitrarily small, and with the right hyper-parameters Adam can be shown to converge with the same O(d ln(N)/√N) rate as Adagrad. When used with its default parameters, however, Adam does not converge, and, just like constant step-size SGD, it moves away from the initialization point faster than Adagrad, which might explain its practical success. Finally, we obtain the tightest dependency on the heavy-ball momentum decay rate β_1 among all previous convergence bounds for non-convex Adam and Adagrad, improving from O((1-β_1)^{-3}) to O((1-β_1)^{-1}).
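To make the objects in the abstract concrete, here is a minimal NumPy sketch of the textbook Adagrad and Adam update rules on a toy quadratic objective. This is an illustration only, not the authors' code: the default constants, the toy objective, and the schedule used below (step size proportional to 1/√N and β2 = 1 − 1/N, one choice consistent with the regime in which Adam matches Adagrad's O(d ln(N)/√N) rate) are assumptions, and the paper may analyze a slightly different variant of the update (for example, without bias correction).

```python
import numpy as np

def adagrad_step(x, grad, state, alpha=0.1, eps=1e-8):
    # Adagrad: scale each coordinate by the root of the cumulative sum of squared gradients.
    state["sum_sq"] += grad ** 2
    return x - alpha * grad / (np.sqrt(state["sum_sq"]) + eps)

def adam_step(x, grad, state, alpha=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # Adam: exponential moving averages of the gradient (beta1, the heavy-ball momentum
    # decay rate) and of the squared gradient (beta2), with the usual bias correction.
    state["t"] += 1
    state["m"] = beta1 * state["m"] + (1.0 - beta1) * grad
    state["v"] = beta2 * state["v"] + (1.0 - beta2) * grad ** 2
    m_hat = state["m"] / (1.0 - beta1 ** state["t"])
    v_hat = state["v"] / (1.0 - beta2 ** state["t"])
    return x - alpha * m_hat / (np.sqrt(v_hat) + eps)

# Toy illustration on the smooth objective f(x) = ||x||^2 / 2, whose gradient is x.
# The constants below are illustrative assumptions, not values taken from the paper.
d, N = 10, 10_000
x = np.random.default_rng(0).normal(size=d)
state = {"t": 0, "m": np.zeros(d), "v": np.zeros(d)}
alpha, beta2 = 1.0 / np.sqrt(N), 1.0 - 1.0 / N
for _ in range(N):
    x = adam_step(x, x, state, alpha=alpha, beta2=beta2)
print(np.linalg.norm(x))  # typically much smaller than the initial norm on this toy problem
```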

AUTHORS

Written by

Alexandre Défossez

Léon Bottou

Nicolas Usunier

Francis Bach

Publisher

TMLR

Research Topics

Theory

Related Publications

May 01, 2023

THEORY

CORE MACHINE LEARNING

Meta-Learning in Games

Keegan Harris, Ioannis Anagnostides, Gabriele Farina, Mikhail Khodak, Zhiwei Steven Wu, Tuomas Sandholm, Maria-Florina Balcan

November 23, 2022

THEORY

CORE MACHINE LEARNING

Generalization Bounds for Deep Transfer Learning Using Majority Predictor Accuracy

Tal Hassner, Cuong N. Nguyen, Cuong V. Nguyen, Lam Si Tung Ho, Vu Dinh

November 08, 2022

THEORY

RESEARCH

Beyond neural scaling laws: beating power law scaling via data pruning

Ari Morcos, Shashank Shekhar, Surya Ganguli, Ben Sorscher, Robert Geirhos

December 06, 2021

THEORY

CORE MACHINE LEARNING

Learning on Random Balls is Sufficient for Estimating (Some) Graph Parameters

Takanori Maehara, Hoang NT
