REINFORCEMENT LEARNING

NovelD: A Simple yet Effective Exploration Criterion

November 01, 2021

Abstract

Efficient exploration under sparse rewards remains a key challenge in deep reinforcement learning. Previous exploration methods (e.g., RND) have achieved strong results on multiple hard tasks. However, when there are multiple novel areas to explore, these methods often focus quickly on one area without sufficiently trying the others, in a depth-first-search manner. In some scenarios (e.g., the four-corridor environment in Sec. 4.2), we observe that they explore one corridor for a long time and fail to cover all the states. In contrast, in theoretical RL, an agent with optimistic initialization and a bonus proportional to the inverse square root of the visitation count does not suffer from this problem and explores different novel regions alternately, in a breadth-first-search manner. Inspired by this, we propose a simple but effective criterion called NovelD that weights every novel area approximately equally. Our algorithm is very simple, yet matches or even outperforms multiple SOTA exploration methods on many hard-exploration tasks. Specifically, NovelD solves all the static procedurally-generated tasks in MiniGrid within just 120M environment steps, without any curriculum learning; in comparison, the previous SOTA solves only 50% of them. NovelD also achieves SOTA on multiple tasks in NetHack, a rogue-like game containing even more challenging procedurally-generated environments. On multiple Atari games (e.g., Montezuma's Revenge, Venture, Gravitar), NovelD outperforms RND. We analyze NovelD thoroughly in MiniGrid and find that, empirically, it helps the agent explore the environment more uniformly, with a focus on exploring beyond the boundary of already-explored regions.
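
A criterion of this kind is easy to state in code. Below is a minimal sketch, not the authors' released implementation, of how a novelty-difference bonus can be computed with an RND-style novelty estimator: the intrinsic reward is the clipped difference between the novelty of the next state and a scaled novelty of the current state, paid out only on the first visit to that state within an episode. The network sizes, the scaling factor alpha, and the first-visit gate shown here are illustrative assumptions, not the paper's exact settings.

    import torch
    import torch.nn as nn

    class RNDNovelty(nn.Module):
        """RND-style novelty: prediction error of a trainable predictor
        against a fixed, randomly initialized target network. Frequently
        visited states are predicted well and thus score low novelty."""

        def __init__(self, obs_dim: int, feat_dim: int = 64):
            super().__init__()
            def mlp():
                return nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                     nn.Linear(128, feat_dim))
            self.target, self.predictor = mlp(), mlp()
            for p in self.target.parameters():   # the target is never trained
                p.requires_grad_(False)

        def forward(self, obs: torch.Tensor) -> torch.Tensor:
            # Per-observation mean squared prediction error = novelty score.
            return (self.predictor(obs) - self.target(obs)).pow(2).mean(-1)

    def noveld_bonus(novelty_next, novelty_curr, first_visit, alpha=0.5):
        """Clipped novelty difference, granted only on the first visit to
        the next state within the current episode:

            max(novelty(s') - alpha * novelty(s), 0) * 1[first visit of s']

        `first_visit` is a 0/1 tensor from an episodic visit counter
        (an assumed component, sketched here rather than implemented)."""
        return torch.clamp(novelty_next - alpha * novelty_curr, min=0.0) * first_visit

Because the bonus depends on the change in novelty across a transition rather than the raw novelty of a state, transitions that cross from well-explored into less-explored territory score highest, which is consistent with the boundary-focused exploration behavior described in the abstract.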

Download the Paper

AUTHORS

Tianjun Zhang

Huazhe Xu

Xiaolong Wang

Yi Wu

Kurt Keutzer

Joseph E. Gonzalez

Yuandong Tian

Publisher

NeurIPS

Research Topics

Reinforcement Learning

Related Publications

December 05, 2021

REINFORCEMENT LEARNING

Local Differential Privacy for Regret Minimization in Reinforcement Learning

Evrard Garcelon, Vianney Perchet, Ciara Pike-Burke, Matteo Pirotta

December 05, 2021

REINFORCEMENT LEARNING

Hierarchical Skills for Efficient Exploration

Jonas Gehring, Gabriel Synnaeve, Andreas Krause, Nicolas Usunier

November 12, 2021

THEORY

REINFORCEMENT LEARNING

Bandits with Knapsacks beyond the Worst-Case Analysis

Karthik Abinav Sankararaman, Aleksandrs Slivkins

November 09, 2021

REINFORCEMENT LEARNING

Interesting Object, Curious Agent: Learning Task-Agnostic Exploration

Simone Parisi, Victoria Dean, Deepak Pathak, Abhinav Gupta
