REINFORCEMENT LEARNING

Hierarchical Skills for Efficient Exploration

December 05, 2021

Abstract

In reinforcement learning, pre-trained low-level skills have the potential to greatly facilitate exploration. However, prior knowledge of the downstream task is required to strike the right balance between generality (fine-grained control) and specificity (faster learning) in skill design. In previous work on continuous control, the sensitivity of methods to this trade-off has not been addressed explicitly, as locomotion provides a suitable prior for navigation tasks, which have been of foremost interest. In this work, we analyze this trade-off for low-level policy pre-training with a new benchmark suite of diverse, sparse-reward tasks for bipedal robots. We alleviate the need for prior knowledge by proposing a hierarchical skill learning framework that acquires skills of varying complexity in an unsupervised manner. To utilize these skills on downstream tasks, we present a three-layered hierarchical learning algorithm that automatically trades off between general and specific skills as required by the task at hand. In our experiments, we show that our approach performs this trade-off effectively and achieves better results than current state-of-the-art methods for end-to-end hierarchical reinforcement learning and unsupervised skill discovery.
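The three-layered design described in the abstract can be pictured as a control loop in which the upper layers decide, at a coarse timescale, which pre-trained skill to commit to and which goal to set for it, while the selected low-level skill policy produces raw actions at every environment step. The Python sketch below is purely illustrative: the class and parameter names, the dictionary-style interfaces, and the fixed decision horizon are assumptions made for exposition, not the paper's implementation.

# Hypothetical three-layer control loop. None of these names come from the
# paper; they only illustrate trading off between pre-trained skills of
# varying complexity at run time.
class HierarchicalAgent:
    def __init__(self, skill_selector, goal_proposers, skill_policies, horizon=10):
        self.skill_selector = skill_selector  # top layer: picks a skill (general vs. specific)
        self.goal_proposers = goal_proposers  # middle layer: one goal proposer per skill
        self.skill_policies = skill_policies  # bottom layer: pre-trained, goal-conditioned policies
        self.horizon = horizon                # env steps between high-level decisions (assumed fixed)
        self.skill = None
        self.goal = None

    def act(self, obs, t):
        # The two upper layers act every `horizon` environment steps;
        # the chosen low-level skill policy acts at every step.
        if t % self.horizon == 0:
            self.skill = self.skill_selector(obs)
            self.goal = self.goal_proposers[self.skill](obs)
        return self.skill_policies[self.skill](obs, self.goal)

Under this framing, committing to a coarse skill shrinks the high-level search space and speeds up learning, while a more fine-grained skill retains generality for tasks that need precise control; the trade-off the abstract refers to is then made by the top layer's skill selection.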

AUTHORS

Jonas Gehring

Gabriel Synnaeve

Andreas Krause

Nicolas Usunier

Publisher

NeurIPS

Research Topics

Reinforcement Learning

Related Publications

January 06, 2024

RANKING AND RECOMMENDATIONS

REINFORCEMENT LEARNING

Learning to bid and rank together in recommendation systems

Geng Ji, Wentao Jiang, Jiang Li, Fahmid Morshed Fahid, Zhengxing Chen, Yinghua Li, Jun Xiao, Chongxi Bao, Zheqing (Bill) Zhu

December 11, 2023

REINFORCEMENT LEARNING

CORE MACHINE LEARNING

TaskMet: Task-driven Metric Learning for Model Learning

Dishank Bansal, Ricky Chen, Mustafa Mukadam, Brandon Amos

October 26, 2023

REINFORCEMENT LEARNING

Dynamic Subgoal-based Exploration via Bayesian Optimization

Daniel Jiang

October 01, 2023

REINFORCEMENT LEARNING

CORE MACHINE LEARNING

Q-Pensieve: Boosting Sample Efficiency of Multi-Objective RL Through Memory Sharing of Q-Snapshots

Wei Hung, Bo-Kai Huang, Ping-Chun Hsieh, Xi Liu
