MoDem: Accelerating Visual Model-Based Reinforcement Learning with Demonstrations

May 04, 2023

Abstract

Poor sample efficiency remains the primary challenge to deploying deep Reinforcement Learning (RL) algorithms in real-world applications, particularly for visuo-motor control. Model-based RL has the potential to be highly sample efficient by concurrently learning a world model and using synthetic rollouts for planning and policy improvement. In practice, however, sample-efficient learning with model-based RL is bottlenecked by the exploration challenge. In this work, we find that leveraging just a handful of demonstrations can dramatically improve the sample efficiency of model-based RL. Simply appending demonstrations to the interaction dataset, however, does not suffice. We identify key ingredients for leveraging demonstrations in model learning -- policy pretraining, targeted exploration, and oversampling of demonstration data -- which form the three phases of our model-based RL framework. We empirically study three complex visuo-motor control domains and find that our method is 150%-250% more successful in completing sparse-reward tasks than prior approaches in the low-data regime (100K interaction steps, 5 demonstrations).
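To illustrate one of the three ingredients the abstract names, the sketch below shows demonstration oversampling: with only 5 demonstrations against ~100K interaction steps, uniform sampling would almost never revisit the demos, so a fixed per-batch fraction keeps them in every update. This is a minimal illustration, not the paper's implementation; `sample_batch`, `demo_ratio`, and the buffer layout are hypothetical names chosen for this example:

```python
import random

def sample_batch(demo_buffer, interaction_buffer, batch_size, demo_ratio=0.25):
    """Draw a training batch that oversamples demonstration transitions.

    demo_ratio (a hypothetical knob) fixes the fraction of each batch
    drawn from the small demonstration buffer, regardless of how large
    the interaction buffer grows during training.
    """
    n_demo = int(batch_size * demo_ratio) if demo_buffer else 0
    n_env = batch_size - n_demo
    # Sample with replacement: the demo buffer (e.g. 5 trajectories)
    # is typically far smaller than a single batch.
    batch = [random.choice(demo_buffer) for _ in range(n_demo)]
    batch += [random.choice(interaction_buffer) for _ in range(n_env)]
    random.shuffle(batch)  # avoid ordering effects within the batch
    return batch
```

Under uniform sampling over a merged buffer of 5 demo and 100,000 environment transitions, the expected number of demo transitions in a batch of 64 is about 0.003; the fixed ratio above guarantees 16.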

AUTHORS

Nicklas Hansen

Yixin Lin

Hao Su

Xiaolong Wang

Vikash Kumar

Aravind Rajeswaran

Publisher

ICLR

Research Topics

Reinforcement Learning

Robotics

Core Machine Learning

Related Publications

January 06, 2024

RANKING AND RECOMMENDATIONS

REINFORCEMENT LEARNING

Learning to bid and rank together in recommendation systems

Geng Ji, Wentao Jiang, Jiang Li, Fahmid Morshed Fahid, Zhengxing Chen, Yinghua Li, Jun Xiao, Chongxi Bao, Zheqing (Bill) Zhu

December 11, 2023

REINFORCEMENT LEARNING

CORE MACHINE LEARNING

TaskMet: Task-driven Metric Learning for Model Learning

Dishank Bansal, Ricky Chen, Mustafa Mukadam, Brandon Amos

October 26, 2023

REINFORCEMENT LEARNING

Dynamic Subgoal-based Exploration via Bayesian Optimization

Daniel Jiang

October 12, 2023

ROBOTICS

SLAP: Spatial-Language Attention Policies

Christopher Paxton, Jay Vakil, Priyam Parashar, Sam Powers, Xiaohan Zhang, Yonatan Bisk, Vidhi Jain
