Sketched Newton-Raphson

July 10, 2020

Abstract

We propose a new globally convergent stochastic second-order method. Our starting point is the development of a new Sketched Newton-Raphson (SNR) method for solving large-scale nonlinear equations of the form F(x)=0 with F: R^d -> R^d. We then show how to design several stochastic second-order optimization methods by rewriting the optimization problem of interest as a system of nonlinear equations and applying SNR. For instance, by applying SNR to find a stationary point of a generalized linear model (GLM), we derive completely new and scalable stochastic second-order methods. We show that the resulting method is highly competitive with state-of-the-art variance-reduced methods. Using a variable splitting trick, we also show that the Stochastic Newton method (SNM) is a special case of SNR, and we use this connection to establish the first global convergence theory for SNM. Indeed, by showing that SNR can be interpreted as a variant of the stochastic gradient descent (SGD) method, we are able to leverage the proof techniques of SGD and establish a global convergence theory and rates of convergence for SNR. As a special case, our theory also provides a new global convergence theory for the original Newton-Raphson method under strictly weaker assumptions than those commonly used for global convergence. There are many ways to rewrite an optimization problem as a system of nonlinear equations, and each rewrite leads to a distinct method when SNR is applied. As such, we believe that SNR and its global convergence theory will open the way to designing and analysing a host of new stochastic second-order methods.
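To make the idea concrete, the sketch below shows one natural reading of a sketched Newton step: at each iteration the Newton system DF(x_k) dx = -F(x_k) is compressed by a random sketching matrix S_k, and the iterate takes the least-norm step solving the sketched system, i.e. x_{k+1} = x_k - gamma * DF(x_k)^T S_k (S_k^T DF(x_k) DF(x_k)^T S_k)^+ S_k^T F(x_k). This is a minimal illustrative sketch, not the paper's implementation: the function name snr, the Gaussian sketch, the step size gamma, the sketch size, the iteration count, and the toy diagonal test problem are all assumptions made for the example.

import numpy as np

def snr(F, DF, x0, sketch_size=2, gamma=1.0, iters=100, seed=0):
    """Illustrative Sketched Newton-Raphson for F(x) = 0.

    Each step draws a Gaussian sketch S and takes the least-norm dx
    solving the sketched Newton system S^T (F(x) + DF(x) dx) = 0.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        Fx, J = F(x), DF(x)                     # residual and Jacobian at x
        S = rng.standard_normal((Fx.size, sketch_size))
        A = J.T @ S                             # DF(x)^T S, shape d x tau
        M = np.linalg.pinv(S.T @ (J @ A))       # (S^T DF DF^T S)^+
        x = x - gamma * (A @ (M @ (S.T @ Fx)))  # sketched Newton step
    return x

# Toy system with componentwise roots at +/-1: F(x) = x^2 - 1.
F = lambda x: x**2 - 1.0
DF = lambda x: np.diag(2.0 * x)
x = snr(F, DF, x0=np.array([3.0, -2.0, 0.5]))
print(x, np.linalg.norm(F(x)))  # residual norm should be near zero

Note that with sketch_size equal to d, gamma = 1, and S the identity, the step above reduces to the classical Newton-Raphson update whenever the Jacobian is invertible, consistent with the abstract's claim that the original Newton-Raphson method is recovered as a special case.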

AUTHORS

Rui Yuan

Alessandro Lazaric

Robert Gower

Publisher

ICML Workshop
