March 29, 2023
We present the largest and most comprehensive empirical study of pre-trained visual representations (PVRs) or visual ‘foundation models’ for Embodied AI. First, we curate CortexBench, consisting of 17 different tasks spanning locomotion, navigation, dexterous manipulation, and mobile manipulation. Next, we systematically evaluate existing PVRs and find that none is universally dominant. To study the effect of pre-training data scale and diversity, we combine over 4,000 hours of egocentric videos from 7 different sources (over 5.6M images) and ImageNet to train different-sized vision transformers using Masked Auto-Encoding (MAE) on slices of this data. Contrary to inferences from prior work, we find that scaling dataset size and diversity does not improve performance universally (but does so on average). Our largest model, named VC-1, outperforms all prior PVRs on average but does not universally dominate either. Finally, we show that task- or domain-specific adaptation of VC-1 leads to substantial gains, with VC-1 (adapted) achieving performance competitive with or superior to the best known results on all of the benchmarks in CortexBench. These models required over 10,000 GPU-hours to train and can be found on our website for the benefit of the research community.
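The MAE pre-training referenced above hides a large fraction of image patches and trains a vision transformer to reconstruct them from the visible remainder. As a rough illustration of the masking step only (not the paper's actual code; the function name, 75% mask ratio, and 16-pixel patch size are illustrative assumptions, though they match common MAE defaults), the patchify-and-mask operation can be sketched as:

```python
import numpy as np

def mae_mask(image, patch=16, mask_ratio=0.75, seed=0):
    """Split an image into non-overlapping patches and randomly hide a
    fraction of them, MAE-style. Returns the visible (unmasked) patches
    and a boolean mask over patch indices (True = hidden from encoder).
    Illustrative sketch, not the authors' implementation."""
    h, w, c = image.shape
    gh, gw = h // patch, w // patch
    # Reshape (H, W, C) into (num_patches, patch*patch*C) flat patch vectors.
    patches = image[:gh * patch, :gw * patch].reshape(gh, patch, gw, patch, c)
    patches = patches.transpose(0, 2, 1, 3, 4).reshape(gh * gw, -1)
    n = patches.shape[0]
    n_mask = int(n * mask_ratio)
    rng = np.random.default_rng(seed)
    mask = np.zeros(n, dtype=bool)
    mask[rng.permutation(n)[:n_mask]] = True
    # Only the visible patches are fed to the encoder; the decoder is
    # trained to reconstruct the hidden ones.
    return patches[~mask], mask

# For a 224x224x3 frame with 16-pixel patches: 196 patches total,
# 147 masked at a 0.75 ratio, 49 visible.
visible, mask = mae_mask(np.zeros((224, 224, 3)))
```

At a 0.75 mask ratio the encoder sees only a quarter of the patches, which is what makes MAE pre-training cheap enough to scale to thousands of hours of egocentric video.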
Written by
Franziska Meier
Aravind Rajeswaran
Jitendra Malik
Karmesh Yadav
Oleksandr Maksymets
Sergio Arnaud
Sneha Silwal
Vincent-Pierre Berges
Aryan Jain
Claire Chen
Jason Ma
Yixin Lin
Publisher
arXiv
Research Topics
Robotics