From Low Probability to High Confidence in Stochastic Convex Optimization

February 26, 2021

Abstract

Standard results in stochastic convex optimization bound the number of samples that an algorithm needs to generate a point with small function value in expectation. More nuanced high probability guarantees are rare, and typically either rely on “light-tail” noise assumptions or exhibit worse sample complexity. In this work, we show that a wide class of stochastic optimization algorithms for strongly convex problems can be augmented with high confidence bounds at an overhead cost that is only logarithmic in the confidence level and polylogarithmic in the condition number. The procedure we propose, called proxBoost, is elementary and builds on two well-known ingredients: robust distance estimation and the proximal point method. We discuss consequences for both streaming (online) algorithms and offline algorithms based on empirical risk minimization.
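
As a purely illustrative sketch, the snippet below shows one common form of the robust distance estimation ingredient in Python: run a base stochastic algorithm several times independently and keep the output whose median distance to the other outputs is smallest. The function names (robust_distance_estimation, boost_confidence, base_run) and the median-distance selector are assumptions made for illustration, not the paper's exact construction; proxBoost additionally wraps this kind of aggregation inside a proximal point outer loop.

```python
import numpy as np

def robust_distance_estimation(candidates):
    """Pick the run whose output sits closest to the bulk of the other runs.

    candidates: array-like of shape (m, d), the outputs of m independent runs.
    If each run lands within radius r of the target with probability > 1/2,
    the selected point is within O(r) of the target with probability at least
    1 - exp(-c * m) for some constant c > 0.
    """
    X = np.asarray(candidates, dtype=float)
    # Pairwise Euclidean distances between all candidate outputs.
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # Score each candidate by its median distance to the candidates.
    scores = np.median(dists, axis=1)
    # Return the candidate with the smallest median distance.
    return X[int(np.argmin(scores))]

def boost_confidence(base_run, m):
    """Aggregate m ~ log(1/delta) independent runs of a hypothetical base_run()."""
    return robust_distance_estimation([base_run() for _ in range(m)])
```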

Written by

Damek Davis

Dmitriy Drusvyatskiy

Lin Xiao

Junyu Zhang

Research Topics

Theory

Core Machine Learning

Related Publications

November 08, 2022

Theory

Beyond neural scaling laws: beating power law scaling via data pruning

Ari Morcos, Shashank Shekhar, Surya Ganguli, Ben Sorscher, Robert Geirhos

November 30, 2020

Theory

Ranking & Recommendations

On ranking via sorting by estimated expected utility

Nicolas Usunier, Clément Calauzènes

November 30, 2020

Theory

Learning Optimal Representations with the Decodable Information Bottleneck

Rama Vedantam, David Schwab, Douwe Kiela, Yann Dubois

May 03, 2019

Theory

Fluctuation-dissipation relations for stochastic gradient descent

Sho Yaida

March 12, 2018

Theory

Geometrical Insights for Implicit Generative Modeling

Leon Bottou, Martin Arjovsky, David Lopez-Paz, Maxime Oquab

April 30, 2018

Theory

mixup: Beyond Empirical Risk Minimization

Hongyi Zhang, Moustapha Cisse, Yann Dauphin, David Lopez-Paz

June 09, 2019

Theory

Manifold Mixup: Better Representations by Interpolating Hidden States

Vikas Verma, Alex Lamb, Christopher Beckham, Amir Najafi, Ioannis Mitliagkas, David Lopez-Paz, Yoshua Bengio
