REINFORCEMENT LEARNING

Dynamic Subgoal-based Exploration via Bayesian Optimization

October 26, 2023

Abstract

Reinforcement learning in sparse-reward navigation environments with expensive and limited interactions is challenging and demands effective exploration. Motivated by complex navigation tasks that require real-world training (when cheap simulators are not available), we consider an agent that faces an unknown distribution of environments and must decide on an exploration strategy. The agent may leverage a series of training environments to improve its policy before it is evaluated in a test environment drawn from the same distribution. Most existing approaches focus on fixed exploration strategies, and the few that treat exploration as a meta-optimization problem tend to ignore the need for cost-efficient exploration. We propose a cost-aware Bayesian optimization approach that efficiently searches over a class of dynamic subgoal-based exploration strategies. The algorithm adjusts several levers (the locations of the subgoals, the length of each episode, and the number of replications per trial) to overcome the challenges of sparse rewards, expensive interactions, and noise. An experimental evaluation demonstrates that the new approach outperforms existing baselines across a number of problem domains. We also provide a theoretical foundation and prove that the method asymptotically identifies a near-optimal subgoal design.
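For intuition, below is a minimal, hypothetical sketch of the kind of cost-aware Bayesian optimization loop the abstract describes: a Gaussian process surrogate over subgoal designs (subgoal location, episode length, replications per trial) and an expected-improvement-per-cost acquisition rule. The toy objective, the cost model, and all names (run_agent, gp_post) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def run_agent(design):
    """Toy stand-in for training/evaluating the agent with one subgoal design.
    design = (sx, sy, ep_frac, rep_frac) in [0,1]^4; returns (noisy value, cost)."""
    sx, sy, ep_frac, rep_frac = design
    ep_len = 10 + 90 * ep_frac                       # episode length in env steps
    n_rep = 1 + int(4 * rep_frac)                    # replications per trial
    value = -((sx - 0.7) ** 2 + (sy - 0.4) ** 2)     # unknown objective (toy)
    value += 0.05 * np.log(ep_len / 10)              # longer episodes help a little
    value += rng.normal(0.0, 0.3 / np.sqrt(n_rep))   # more replications -> less noise
    return value, ep_len * n_rep                     # cost = env interactions used

def rbf(A, B, ls=0.3):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def gp_post(X, y, Xs, noise=1e-2):
    """Gaussian process posterior mean/std at candidate designs Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks, Kss = rbf(X, Xs), rbf(Xs, Xs)
    L = np.linalg.cholesky(K)
    a = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, Ks)
    mu = Ks.T @ a
    sd = np.sqrt(np.clip(np.diag(Kss) - (v ** 2).sum(0), 1e-12, None))
    return mu, sd

# A few random designs to start, then a budget-limited cost-aware BO loop.
X = rng.uniform(0, 1, (5, 4))
results = [run_agent(x) for x in X]
y = np.array([r[0] for r in results])
budget = 3000.0 - sum(r[1] for r in results)

while budget > 0:
    cand = rng.uniform(0, 1, (256, 4))
    mu, sd = gp_post(X, y, cand)
    z = (mu - y.max()) / sd
    ei = sd * (z * norm.cdf(z) + norm.pdf(z))               # expected improvement
    exp_cost = (10 + 90 * cand[:, 2]) * (1 + (4 * cand[:, 3]).astype(int))
    x_next = cand[np.argmax(ei / exp_cost)]                 # EI per unit cost
    val, cost = run_agent(x_next)
    X, y = np.vstack([X, x_next]), np.append(y, val)
    budget -= cost

print("best design:", X[np.argmax(y)], "value:", y.max())
```

The cost-aware acquisition trades off how promising a design looks against how many environment interactions it would consume, so cheap, informative designs are tried before long, heavily replicated ones.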

AUTHORS

Yijia Wang

Matthias Poloczek

Daniel Jiang

Publisher

Transactions on Machine Learning Research

Research Topics

Reinforcement Learning

Related Publications

April 30, 2024

REINFORCEMENT LEARNING

Multi-Agent Diagnostics for Robustness via Illuminated Diversity

Mikayel Samvelyan, Minqi Jiang, Davide Paglieri, Jack Parker-Holder, Tim Rocktäschel

January 06, 2024

RANKING AND RECOMMENDATIONS

REINFORCEMENT LEARNING

Learning to bid and rank together in recommendation systems

Geng Ji, Wentao Jiang, Jiang Li, Fahmid Morshed Fahid, Zhengxing Chen, Yinghua Li, Jun Xiao, Chongxi Bao, Zheqing (Bill) Zhu

December 11, 2023

REINFORCEMENT LEARNING

CORE MACHINE LEARNING

TaskMet: Task-driven Metric Learning for Model Learning

Dishank Bansal, Ricky Chen, Mustafa Mukadam, Brandon Amos

December 10, 2023

REINFORCEMENT LEARNING

Weakly Coupled Deep Q-Networks

Ibrahim El Shar, Daniel Jiang
