May 04, 2023
Poor sample efficiency continues to be the primary challenge for deploying deep reinforcement learning (RL) algorithms in real-world applications, particularly for visuo-motor control. Model-based RL has the potential to be highly sample efficient by concurrently learning a world model and using synthetic rollouts for planning and policy improvement. In practice, however, sample-efficient learning with model-based RL is bottlenecked by the exploration challenge. In this work, we find that leveraging just a handful of demonstrations can dramatically improve the sample efficiency of model-based RL. Simply appending demonstrations to the interaction dataset, however, does not suffice. We identify key ingredients for leveraging demonstrations in model learning: policy pretraining, targeted exploration, and oversampling of demonstration data, which together form the three phases of our model-based RL framework. We empirically study three complex visuo-motor control domains and find that our method is 150%-250% more successful in completing sparse-reward tasks than prior approaches in the low-data regime (100K interaction steps, 5 demonstrations).
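To make the third ingredient concrete, below is a minimal Python sketch of demonstration oversampling: each training batch draws a fixed fraction of transitions from the small demonstration set, with the remainder coming from the agent's own replay data. This is an illustrative sketch only, not the paper's implementation; the function sample_mixed_batch, the demo_fraction parameter, and the toy transition tuples are all hypothetical names chosen for this example.

import random

def sample_mixed_batch(demo_buffer, replay_buffer, batch_size=256, demo_fraction=0.25):
    """Draw a training batch in which demonstration transitions are
    oversampled at a fixed fraction, no matter how few demos exist."""
    n_demo = max(1, int(batch_size * demo_fraction))
    # Sample with replacement: a handful of demo trajectories is far
    # smaller than a typical batch, so repeats are expected.
    demo_part = random.choices(demo_buffer, k=n_demo)
    interact_part = random.choices(replay_buffer, k=batch_size - n_demo)
    batch = demo_part + interact_part
    random.shuffle(batch)
    return batch

# Toy usage: transitions are (obs, action, reward, next_obs) tuples.
demos = [("s_demo", "a_demo", 1.0, "s_demo_next")] * 50   # a few short demo trajectories
replay = [("s", "a", 0.0, "s_next")] * 10_000             # agent's own experience
batch = sample_mixed_batch(demos, replay)
print(sum(1 for t in batch if t[0] == "s_demo") / len(batch))  # close to demo_fraction

Without oversampling, uniform sampling from the combined dataset would all but ignore the 50 demo transitions among 10,000 interaction transitions; fixing the fraction keeps the sparse-reward demonstration signal present in every model and policy update.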
Written by: Nicklas Hansen, Yixin Lin, Hao Su, Xiaolong Wang, Vikash Kumar, Aravind Rajeswaran
Publisher: ICLR