REINFORCEMENT LEARNING

Hierarchical Skills for Efficient Exploration

December 05, 2021

Abstract

In reinforcement learning, pre-trained low-level skills have the potential to greatly facilitate exploration. However, prior knowledge of the downstream task is required to strike the right balance between generality (fine-grained control) and specificity (faster learning) in skill design. In previous work on continuous control, the sensitivity of methods to this trade-off has not been addressed explicitly, as locomotion provides a suitable prior for navigation tasks, which have been of foremost interest. In this work, we analyze this trade-off for low-level policy pre-training with a new benchmark suite of diverse, sparse-reward tasks for bipedal robots. We alleviate the need for prior knowledge by proposing a hierarchical skill learning framework that acquires skills of varying complexity in an unsupervised manner. For utilization on downstream tasks, we present a three-layered hierarchical learning algorithm to automatically trade off between general and specific skills as required by the respective task. In our experiments, we show that our approach performs this trade-off effectively and achieves better results than current state-of-the-art methods for end-to-end hierarchical reinforcement learning and unsupervised skill discovery.
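The three-layered control flow described in the abstract can be sketched in simplified form. This is an illustrative sketch only, not the authors' implementation: the skill names, goal-space dimensions, and the random stand-in "policies" below are hypothetical placeholders for learned neural networks. It shows the layering the abstract describes: a high-level policy selects among pre-trained skills of varying complexity (trading off general, fine-grained control against specific, faster-to-learn skills), a mid-level policy emits a goal in the chosen skill's goal space, and the pre-trained low-level skill policy produces actions.

```python
import random

class HierarchicalAgent:
    """Toy three-level hierarchy; all 'policies' are random placeholders."""

    def __init__(self, skills, rng=None):
        # skills: mapping from skill name to the dimension of its goal space.
        # A larger goal space stands in for a more general (fine-grained) skill.
        self.skills = skills
        self.rng = rng or random.Random(0)

    def high_level(self, observation):
        # Select which pre-trained skill to deploy for the next sub-episode.
        return self.rng.choice(sorted(self.skills))

    def mid_level(self, observation, skill):
        # Emit a goal in the selected skill's goal space.
        dim = self.skills[skill]
        return [self.rng.uniform(-1.0, 1.0) for _ in range(dim)]

    def low_level(self, observation, skill, goal):
        # The pre-trained skill policy would track the goal; dummy action here.
        return [0.0] * 2  # e.g. torques for a 2-joint actuator

    def act(self, observation):
        skill = self.high_level(observation)
        goal = self.mid_level(observation, skill)
        action = self.low_level(observation, skill, goal)
        return skill, goal, action

# Hypothetical skill set: a coarse navigation skill vs. a fine posture skill.
agent = HierarchicalAgent({"translate_xy": 2, "full_posture": 9})
skill, goal, action = agent.act(observation=[0.0] * 10)
```

In this layering, only the top two levels need to be trained on the downstream task, which is how skill pre-training can make sparse-reward exploration tractable.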

AUTHORS

Written by

Jonas Gehring

Gabriel Synnaeve

Andreas Krause

Nicolas Usunier

Publisher

NeurIPS

Research Topics

Reinforcement Learning

Related Publications

December 15, 2021

ROBOTICS

REINFORCEMENT LEARNING

Learning Accurate Long-term Dynamics for Model-based Reinforcement Learning

Roberto Calandra, Nathan Owen Lambert, Albert Wilcox, Howard Zhang, Kristofer S. J. Pister

December 05, 2021

REINFORCEMENT LEARNING

Local Differential Privacy for Regret Minimization in Reinforcement Learning

Evrard Garcelon, Vianney Perchet, Ciara Pike-Burke, Matteo Pirotta

November 12, 2021

THEORY

REINFORCEMENT LEARNING

Bandits with Knapsacks beyond the Worst-Case Analysis

Karthik Abinav Sankararaman, Aleksandrs Slivkins

November 09, 2021

REINFORCEMENT LEARNING

Interesting Object, Curious Agent: Learning Task-Agnostic Exploration

Simone Parisi, Victoria Dean, Deepak Pathak, Abhinav Gupta
