June 23, 2020
In this paper, we address the discovery of robotic options from demonstrations in an unsupervised manner. Specifically, we present a framework to jointly learn low-level control policies and higher-level policies governing how to use them, from demonstrations of a robot performing various tasks. By representing options as continuous latent variables, we frame the problem of learning these options as latent variable inference. We then present a temporal formulation of variational inference, based on a temporal factorization of trajectory likelihoods, that allows us to infer options in an unsupervised manner. We demonstrate the ability of our framework to learn such options across three robotic demonstration datasets.
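The latent-variable inference the abstract describes can be illustrated, in a heavily simplified form, with a toy mixture model: each latent "option" is a mode that explains part of the action data, and coordinate-ascent updates on a variational posterior maximize an evidence lower bound (ELBO). The two-option Gaussian setup below is an illustrative assumption only, not the paper's actual model (which uses continuous latent variables and a temporal factorization over full trajectories).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 1-D actions drawn under two latent "options" (hypothetical;
# the paper's demonstrations are full robot trajectories).
true_means = np.array([-1.0, 1.0])
z_true = rng.integers(0, 2, size=200)
actions = rng.normal(true_means[z_true], 0.3)

def elbo(actions, q, means, sigma=0.3, prior=0.5):
    """ELBO for a two-option mixture: E_q[log p(a|z)] - KL(q(z) || p(z))."""
    log_lik = np.stack(
        [-0.5 * ((actions - m) / sigma) ** 2
         - np.log(sigma * np.sqrt(2.0 * np.pi)) for m in means], axis=1)
    expected_ll = np.sum(q * log_lik)
    kl = np.sum(q * (np.log(q + 1e-12) - np.log(prior)))
    return expected_ll - kl

# Coordinate ascent: update the variational posterior q(z|a), then the
# option parameters (the means); each step increases the ELBO.
means = np.array([0.0, 0.5])
for _ in range(50):
    log_lik = np.stack(
        [-0.5 * ((actions - m) / 0.3) ** 2 for m in means], axis=1)
    q = np.exp(log_lik - log_lik.max(axis=1, keepdims=True))
    q /= q.sum(axis=1, keepdims=True)   # posterior over options per action
    means = (q * actions[:, None]).sum(axis=0) / q.sum(axis=0)

print("recovered option means:", np.sort(means))
print("final ELBO:", elbo(actions, q, means))
```

Run on this synthetic data, the updates recover option means near the true values of -1 and 1; the paper replaces this per-sample mixture with a temporal factorization so that options persist over trajectory segments.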
Written by
Tanmay Shankar
Abhinav Gupta
Publisher
ICML
Research Topics
Robotics