
An Asymptotically Optimal Primal-Dual Incremental Algorithm for Contextual Linear Bandits

November 30, 2020

Abstract

In the contextual linear bandit setting, algorithms built on the optimism principle fail to exploit the structure of the problem and have been shown to be asymptotically suboptimal. In this paper, we follow recent approaches that derive asymptotically optimal algorithms from problem-dependent regret lower bounds, and we introduce a novel algorithm that improves over the state-of-the-art along multiple dimensions. First, we build on a reformulation of the lower bound in which the context distribution and the exploration policy are decoupled, obtaining an algorithm robust to unbalanced context distributions. Then, by solving the Lagrangian relaxation of the lower bound with an incremental primal-dual approach, we obtain a scalable and computationally efficient algorithm. Finally, we remove forced exploration and instead rely on confidence intervals around the optimization problem to enforce a minimum level of exploration that is better adapted to the problem structure. We prove the asymptotic optimality of our algorithm, while providing both problem-dependent and worst-case finite-time regret guarantees. Our bounds scale with the logarithm of the number of arms, avoiding the linear dependence common to all related prior work. Notably, we establish minimax optimality for any learning horizon in the special case of non-contextual linear bandits. Lastly, we verify empirically that our algorithm outperforms state-of-the-art baselines.
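For context, the problem-dependent lower bound underlying this approach is typically stated, following Graves and Lai, as an optimization over exploration allocations. The rendering below uses standard linear-bandit notation and is our sketch of the object the paper builds on, not its exact decoupled contextual formulation: any consistent algorithm must satisfy \(\liminf_{n\to\infty} R_n / \log n \ge v(\theta^\star)\), where

\[ v(\theta^\star) \;=\; \min_{\eta \ge 0} \sum_{x,\, a \ne a^\star_x} \eta(x,a)\, \Delta(x,a) \quad \text{s.t.} \quad \big\| \phi(x, a^\star_x) - \phi(x,a) \big\|^2_{A(\eta)^{-1}} \;\le\; \frac{\Delta(x,a)^2}{2} \quad \forall x,\, a \ne a^\star_x, \]

with features \(\phi(x,a)\), design matrix \(A(\eta) = \sum_{x,a} \eta(x,a)\, \phi(x,a)\, \phi(x,a)^\top\), and suboptimality gaps \(\Delta(x,a) = \big(\phi(x, a^\star_x) - \phi(x,a)\big)^\top \theta^\star\). The Lagrangian relaxation mentioned in the abstract moves the constraints into the objective with multipliers \(\lambda(x,a) \ge 0\),

\[ \mathcal{L}(\eta, \lambda) \;=\; \sum_{x,a} \eta(x,a)\, \Delta(x,a) \;+\; \sum_{x,\, a \ne a^\star_x} \lambda(x,a) \Big( \big\| \phi(x, a^\star_x) - \phi(x,a) \big\|^2_{A(\eta)^{-1}} - \frac{\Delta(x,a)^2}{2} \Big), \]

so that \(v(\theta^\star) = \min_{\eta} \max_{\lambda \ge 0} \mathcal{L}(\eta, \lambda)\): a saddle point that primal-dual methods can approach with alternating incremental descent steps on \(\eta\) and ascent steps on \(\lambda\).

What follows is a minimal sketch of such a projected gradient descent-ascent iteration on a toy non-contextual instance with known gaps and features. The instance, step sizes, and plain alternating updates are illustrative assumptions; the paper's incremental scheme instead runs online with estimated quantities.

import numpy as np

# Minimal sketch: projected gradient descent-ascent on the Lagrangian
# relaxation of the lower-bound optimization, for a toy non-contextual
# linear bandit with KNOWN parameters. Everything here (features,
# theta_star, step sizes, iteration count) is an illustrative assumption,
# not the paper's actual incremental algorithm.

Phi = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [0.8, 0.6]])          # one feature vector per arm
theta_star = np.array([1.0, 0.2])     # true (here, known) parameter
rewards = Phi @ theta_star
a_star = int(np.argmax(rewards))
gaps = rewards[a_star] - rewards      # suboptimality gaps Delta(a)

K, d = Phi.shape
eta = np.ones(K)                      # primal variable: allocation over arms
lam = np.ones(K)                      # dual variable: one multiplier per arm
lam[a_star] = 0.0                     # no constraint on the optimal arm
lr_eta, lr_lam = 0.05, 0.05

for _ in range(5000):
    # Design matrix A(eta), with a small ridge for invertibility.
    A = Phi.T @ (eta[:, None] * Phi) + 1e-6 * np.eye(d)
    A_inv = np.linalg.inv(A)
    diffs = Phi[a_star] - Phi                       # phi(a*) - phi(a), shape (K, d)
    # Constraint values g(a) = ||phi(a*) - phi(a)||^2_{A^{-1}} - Delta(a)^2 / 2.
    g = np.einsum('kd,de,ke->k', diffs, A_inv, diffs) - gaps**2 / 2.0
    g[a_star] = 0.0
    # dL/d eta(a) = Delta(a) - sum_b lam(b) * (phi(a)^T A^{-1} (phi(a*) - phi(b)))^2
    inner = Phi @ A_inv @ diffs.T                   # (K, K) matrix of inner products
    grad_eta = gaps - (inner**2) @ lam
    # Projected descent on eta (capped, since the optimal arm's allocation
    # is unbounded at the optimum) and projected ascent on lam.
    eta = np.clip(eta - lr_eta * grad_eta, 0.0, 1e3)
    lam = np.maximum(lam + lr_lam * g, 0.0)
    lam[a_star] = 0.0

print("allocation eta:", np.round(eta, 2))
print("approx. lower-bound constant:", round(float(eta @ gaps), 3))

At the saddle point, \(\eta(x,a) \cdot \log n\) approximates how often a near-optimal strategy should pull each suboptimal arm, and \(\sum_{x,a} \eta(x,a)\, \Delta(x,a)\) estimates the constant \(v(\theta^\star)\) multiplying \(\log n\) in the regret.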


AUTHORS

Matteo Pirotta

Alessandro Lazaric

Andrea Tirinzoni

Marcello Restelli

Publisher

NeurIPS

Research Topics

Reinforcement Learning

Related Publications

January 06, 2024

RANKING AND RECOMMENDATIONS

REINFORCEMENT LEARNING

Learning to bid and rank together in recommendation systems

Geng Ji, Wentao Jiang, Jiang Li, Fahmid Morshed Fahid, Zhengxing Chen, Yinghua Li, Jun Xiao, Chongxi Bao, Zheqing (Bill) Zhu

December 11, 2023

REINFORCEMENT LEARNING

CORE MACHINE LEARNING

TaskMet: Task-driven Metric Learning for Model Learning

Dishank Bansal, Ricky Chen, Mustafa Mukadam, Brandon Amos

October 01, 2023

REINFORCEMENT LEARNING

CORE MACHINE LEARNING

Q-Pensieve: Boosting Sample Efficiency of Multi-Objective RL Through Memory Sharing of Q-Snapshots

Wei Hung, Bo-Kai Huang, Ping-Chun Hsieh, Xi Liu

September 12, 2023

RANKING AND RECOMMENDATIONS

REINFORCEMENT LEARNING

Optimizing Long-term Value for Auction-Based Recommender Systems via On-Policy Reinforcement Learning

Bill Zhu, Alex Nikulkov, Dmytro Korenkevych, Fan Liu, Jalaj Bhandari, Ruiyang Xu, Urun Dogan
