Near-Optimal Confidence Sequences for Bounded Random Variables

July 18, 2021

Abstract

Many inference problems, such as sequential decision problems like A/B testing and adaptive sampling schemes like bandit selection, are online in nature. The fundamental problem in online inference is to provide a sequence of confidence intervals that is valid uniformly over the growing sample size. To address this question, we provide a near-optimal confidence sequence for bounded random variables by utilizing Bentkus' concentration results. We show that it improves on existing approaches based on the Cramér-Chernoff technique, such as the Hoeffding, Bernstein, and Bennett inequalities. The resulting confidence sequence is confirmed to be favorable in synthetic coverage problems, adaptive stopping algorithms, and multi-armed bandit problems.
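For intuition, a simple baseline confidence sequence for bounded random variables can be built by applying a fixed-sample concentration inequality at every time step and taking a union bound over time. The sketch below illustrates this with Hoeffding's inequality for i.i.d. samples in [0, 1]; the function name and error-budget allocation are illustrative assumptions, and this is not the Bentkus-based construction of the paper, which yields tighter intervals.

```python
import numpy as np

def hoeffding_confidence_sequence(xs, alpha=0.05):
    """Anytime-valid confidence sequence for the mean of i.i.d. samples in [0, 1].

    Applies Hoeffding's inequality at each time t with error budget
    alpha_t = 6 * alpha / (pi^2 * t^2), which sums to alpha over all t,
    so the intervals cover the true mean simultaneously for all t
    with probability at least 1 - alpha.
    """
    xs = np.asarray(xs, dtype=float)
    t = np.arange(1, len(xs) + 1)
    running_mean = np.cumsum(xs) / t
    alpha_t = 6.0 * alpha / (np.pi ** 2 * t ** 2)
    # Two-sided Hoeffding radius at level alpha_t for samples bounded in [0, 1].
    radius = np.sqrt(np.log(2.0 / alpha_t) / (2.0 * t))
    lower = np.clip(running_mean - radius, 0.0, 1.0)
    upper = np.clip(running_mean + radius, 0.0, 1.0)
    return lower, upper

# Example: track the mean of Bernoulli(0.3) samples as they arrive.
rng = np.random.default_rng(0)
samples = rng.binomial(1, 0.3, size=1000)
lo, hi = hoeffding_confidence_sequence(samples, alpha=0.05)
print(f"after 1000 samples: [{lo[-1]:.3f}, {hi[-1]:.3f}]")
```

Because each time step spends only a 6α/(π²t²) slice of the error budget, the union bound keeps the total probability of ever excluding the true mean below α; the cost is a radius that shrinks more slowly than the fixed-sample Hoeffding interval, which is the looseness the paper's Bentkus-based sequence aims to reduce.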

AUTHORS

Arun Kumar Kuchibhotla

Qinqing Zheng

Publisher

ICML 2021

Research Topics

Core Machine Learning

Theory

Related Publications

November 08, 2022
Theory
Beyond neural scaling laws: beating power law scaling via data pruning
Ari Morcos, Shashank Shekhar, Surya Ganguli, Ben Sorscher, Robert Geirhos

November 30, 2020
Theory
Ranking & Recommendations
On ranking via sorting by estimated expected utility
Nicolas Usunier, Clément Calauzènes

November 30, 2020
Theory
Learning Optimal Representations with the Decodable Information Bottleneck
Rama Vedantam, David Schwab, Douwe Kiela, Yann Dubois

May 03, 2019
Theory
Fluctuation-dissipation relations for stochastic gradient descent
Sho Yaida

March 12, 2018
Theory
Geometrical Insights for Implicit Generative Modeling
Leon Bottou, Martin Arjovsky, David Lopez-Paz, Maxime Oquab

April 30, 2018
Theory
mixup: Beyond Empirical Risk Minimization
Hongyi Zhang, Moustapha Cisse, Yann Dauphin, David Lopez-Paz

June 09, 2019
Theory
Manifold Mixup: Better Representations by Interpolating Hidden States
Vikas Verma, Alex Lamb, Christopher Beckham, Amir Najafi, Ioannis Mitliagkas, David Lopez-Paz, Yoshua Bengio