Learning Robot Skills with Temporal Variational Inference

June 23, 2020

Abstract

In this paper, we address the discovery of robotic options from demonstrations in an unsupervised manner. Specifically, we present a framework to jointly learn low-level control policies and higher-level policies governing how to use them from demonstrations of a robot performing various tasks. By representing options as continuous latent variables, we frame the problem of learning these options as latent variable inference. We then present a temporal formulation of variational inference, based on a temporal factorization of trajectory likelihoods, which allows us to infer options in an unsupervised manner. We demonstrate the ability of our framework to learn such options across three robotic demonstration datasets.
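
The idea in the abstract can be illustrated with a small example. The snippet below is a minimal sketch, not the authors' implementation: it assumes a Gaussian low-level policy pi(a_t | s_t, z_t) conditioned on a continuous latent option z_t, a variational posterior q(z_t | s_t, a_t, z_{t-1}) over options, and an evidence lower bound (ELBO) that factorizes the trajectory likelihood across time steps. All module names, dimensions, the standard-normal prior, and the single-sample estimator are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): learning continuous latent options from
# demonstrations with a temporally factorized variational lower bound.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM, OPTION_DIM = 16, 4, 8  # illustrative sizes


class LowLevelPolicy(nn.Module):
    """Gaussian policy over actions, conditioned on state and current option z_t."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM + OPTION_DIM, 64), nn.Tanh())
        self.mean = nn.Linear(64, ACTION_DIM)
        self.log_std = nn.Parameter(torch.zeros(ACTION_DIM))

    def log_prob(self, state, option, action):
        h = self.net(torch.cat([state, option], dim=-1))
        dist = torch.distributions.Normal(self.mean(h), self.log_std.exp())
        return dist.log_prob(action).sum(-1)


class OptionPosterior(nn.Module):
    """Variational posterior q(z_t | s_t, a_t, z_{t-1}) over continuous options."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM + OPTION_DIM, 64), nn.Tanh())
        self.mean = nn.Linear(64, OPTION_DIM)
        self.log_std = nn.Linear(64, OPTION_DIM)

    def forward(self, state, action, prev_option):
        h = self.net(torch.cat([state, action, prev_option], dim=-1))
        return torch.distributions.Normal(self.mean(h), self.log_std(h).exp())


def elbo(policy, posterior, states, actions):
    """Single-sample ELBO over one demonstration, factorized across time steps."""
    prior = torch.distributions.Normal(torch.zeros(OPTION_DIM), torch.ones(OPTION_DIM))
    z_prev = torch.zeros(OPTION_DIM)
    total = 0.0
    for s_t, a_t in zip(states, actions):
        q_t = posterior(s_t, a_t, z_prev)
        z_t = q_t.rsample()                                   # reparameterized sample
        total = total + policy.log_prob(s_t, z_t, a_t)        # reconstruct the action
        total = total - torch.distributions.kl_divergence(q_t, prior).sum()  # regularize
        z_prev = z_t
    return total
```

Maximizing this bound over demonstration trajectories jointly trains the low-level policy and the option posterior; the paper additionally learns a higher-level policy over the inferred options, which is omitted from this sketch.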

AUTHORS

Tanmay Shankar

Abhinav Gupta

Publisher

ICML

Research Topics

Robotics

Related Publications

October 12, 2023

ROBOTICS

SLAP: Spatial-Language Attention Policies

Christopher Paxton, Jay Vakil, Priyam Parashar, Sam Powers, Xiaohan Zhang, Yonatan Bisk, Vidhi Jain

July 10, 2023

ROBOTICS

NLP

StructDiffusion: Language-Guided Creation of Physically-Valid Structures using Unseen Objects

Weiyu Liu, Yilun Du, Tucker Hermans, Sonia Chernova, Christopher Paxton

June 18, 2023

ROBOTICS

REINFORCEMENT LEARNING

Galactic: Scaling End-to-End Reinforcement Learning for Rearrangement at 100k Steps-Per-Second

Vincent-Pierre Berges, Andrew Szot, Devendra Singh Chaplot, Aaron Gokaslan, Dhruv Batra, Eric Undersander

May 04, 2023

ROBOTICS

REINFORCEMENT LEARNING

MoDem: Accelerating Visual Model-Based Reinforcement Learning with Demonstrations

Nicklas Hansen, Yixin Lin, Hao Su, Xiaolong Wang, Vikash Kumar, Aravind Rajeswaran
