May 04, 2023
Poor sample efficiency remains the primary challenge for deploying deep Reinforcement Learning (RL) algorithms in real-world applications, particularly for visuo-motor control. Model-based RL has the potential to be highly sample efficient by concurrently learning a world model and using synthetic rollouts for planning and policy improvement. In practice, however, sample-efficient learning with model-based RL is bottlenecked by the exploration challenge. In this work, we find that leveraging just a handful of demonstrations can dramatically improve the sample efficiency of model-based RL. Simply appending demonstrations to the interaction dataset, however, does not suffice. We identify key ingredients for leveraging demonstrations in model learning -- policy pretraining, targeted exploration, and oversampling of demonstration data -- which form the three phases of our model-based RL framework. We empirically study three complex visuo-motor control domains and find that our method is 150%-250% more successful in completing sparse reward tasks compared to prior approaches in the low data regime (100K interaction steps, 5 demonstrations).
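To make the "oversampling of demonstration data" ingredient concrete, here is a minimal sketch of how a training batch might mix a small demonstration set with online interaction data. The names (DEMO_FRACTION, sample_batch, the two buffer arguments) are illustrative assumptions for this sketch, not the paper's actual implementation or API.

```python
import random

# Sketch: oversample a handful of demonstrations by reserving a fixed
# fraction of every training batch for demonstration transitions.
DEMO_FRACTION = 0.25  # assumed share of each batch drawn from demos


def sample_batch(demo_buffer, online_buffer, batch_size):
    """Return a batch that oversamples the (small) demonstration set."""
    n_demo = max(1, int(batch_size * DEMO_FRACTION))
    n_online = batch_size - n_demo
    # random.choices samples with replacement, so even 5 demonstrations
    # can fill a fixed share of every batch throughout training.
    batch = random.choices(demo_buffer, k=n_demo)
    batch += random.choices(online_buffer, k=n_online)
    random.shuffle(batch)
    return batch
```

Because the demonstration set is tiny relative to the online buffer, uniform sampling over the combined data would quickly drown out the demonstrations; fixing their per-batch share keeps them influential as online data accumulates.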
Written by
Nicklas Hansen
Yixin Lin
Hao Su
Xiaolong Wang
Vikash Kumar
Aravind Rajeswaran
Publisher
ICLR