REINFORCEMENT LEARNING

Interesting Object, Curious Agent: Learning Task-Agnostic Exploration

November 09, 2021

Abstract

Common approaches for task-agnostic exploration learn tabula rasa: the agent assumes an isolated environment and no prior knowledge or experience. However, in the real world, agents learn across many environments and always bring prior experience as they explore new ones. Exploration is a lifelong process. In this paper, we propose a paradigm change in the formulation and evaluation of task-agnostic exploration. In this setup, the agent first learns to explore across many environments, without any extrinsic goal, in a task-agnostic manner. Later, the agent transfers the learned exploration policy to better explore new environments when solving tasks. In this context, we evaluate several baseline exploration strategies and present a simple yet effective approach to learning task-agnostic exploration policies. Our key idea is that exploration has two components: (1) an agent-centric component encouraging exploration of unseen parts of the environment based on the agent's belief; (2) an environment-centric component encouraging exploration of inherently interesting objects. We show that our formulation is effective and provides the most consistent exploration across several training-testing environment pairs. We also introduce benchmarks and metrics for evaluating task-agnostic exploration strategies. The source code is available at https://github.com/sparisi/cbet/.
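
To make the two components concrete, below is a minimal sketch of one way such an intrinsic reward could be computed, assuming a simple count-based formulation over hashable observations. The class name, the use of raw (observation, next observation) pairs as "changes", and the exact 1/(N(s') + N(c)) reward shape are illustrative assumptions rather than the paper's method; the authors' actual implementation is in the repository linked above.

```python
from collections import defaultdict


class TwoComponentBonus:
    """Intrinsic-reward sketch with an agent-centric term (state novelty)
    and an environment-centric term (novelty of the change a transition
    causes in the environment)."""

    def __init__(self):
        self.state_counts = defaultdict(int)   # agent-centric: visits per state
        self.change_counts = defaultdict(int)  # environment-centric: occurrences per change

    def reward(self, obs, next_obs):
        # Observations must be hashable here (e.g. tuples); real code would
        # first embed or hash high-dimensional observations such as images.
        s = next_obs          # the state the agent lands in
        c = (obs, next_obs)   # a stand-in for the change the transition caused

        self.state_counts[s] += 1
        self.change_counts[c] += 1

        # Rare states and rare changes both raise the bonus: the agent is
        # drawn to unseen regions (agent-centric) and to transitions that
        # alter the environment, e.g. interacting with objects
        # (environment-centric).
        return 1.0 / (self.state_counts[s] + self.change_counts[c])


if __name__ == "__main__":
    bonus = TwoComponentBonus()
    print(bonus.reward(obs=(0, 0), next_obs=(0, 1)))  # 0.5 on a first visit
```

During the task-agnostic phase, a bonus like this would stand in for the extrinsic reward; at transfer time, it could be mixed with or annealed toward the task reward while exploring the new environment.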

AUTHORS

Simone Parisi

Victoria Dean

Deepak Pathak

Abhinav Gupta

Publisher

NeurIPS

Research Topics

Reinforcement Learning

