
Neural Dynamic Policies for End-to-End Sensorimotor Learning

December 6, 2020

Abstract

The current dominant paradigm in sensorimotor control, whether imitation or reinforcement learning, is to train policies directly in raw action spaces such as torque, joint angle, or end-effector position. This forces the agent to make decisions individually at each timestep, and hence limits scalability to continuous, high-dimensional, and long-horizon tasks. In contrast, research in classical robotics has long exploited dynamical systems as a policy representation for learning robot behaviors from demonstrations. These techniques, however, lack the flexibility and generalizability provided by deep learning or deep reinforcement learning, and have remained under-explored in such settings. In this work, we begin to close this gap and embed the structure of dynamical systems into deep neural network-based policies by reparameterizing action spaces with differential equations. We propose Neural Dynamic Policies (NDPs) that make predictions in trajectory distribution space, in contrast to prior policy learning methods where actions represent the raw control space. The embedded structure allows us to perform end-to-end policy learning under both reinforcement and imitation learning setups. We show that NDPs achieve better or comparable performance to state-of-the-art approaches on many robotic control tasks, using both reward-based training and demonstrations. Project video and code are available at: https://shikharbahl.github.io/neural-dynamic-policies/
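To make the idea of "reparameterizing action spaces with differential equations" concrete, the sketch below rolls out a standard dynamic movement primitive (DMP), a second-order dynamical system commonly used as a trajectory representation in classical robotics. The function name, basis-function heuristics, and fixed gains here are illustrative assumptions, not the authors' implementation; in an NDP, the parameters passed in (the forcing-term weights and the goal) would instead be predicted by a neural network from the observation, and the trajectory produced by integrating the system would serve as the policy output.

```python
import numpy as np

def dmp_rollout(y0, goal, weights, T=100, dt=0.01,
                alpha=25.0, beta=6.25, alpha_x=3.0):
    """Integrate one DOF of a DMP: a critically damped spring-damper
    toward `goal`, shaped by a learned forcing term.

    In an NDP, `weights` and `goal` come from a network head; here
    they are plain arguments so the dynamics are easy to inspect.
    """
    n_basis = len(weights)
    # Basis centers spaced along the decaying canonical phase x in (0, 1].
    centers = np.exp(-alpha_x * np.linspace(0, 1, n_basis))
    widths = n_basis ** 1.5 / centers  # common width heuristic
    y, yd, x = float(y0), 0.0, 1.0
    traj = [y]
    for _ in range(T):
        psi = np.exp(-widths * (x - centers) ** 2)
        # Forcing term: weighted basis activations, gated by phase x
        # and scaled by the start-to-goal displacement.
        f = x * (goal - y0) * psi.dot(weights) / (psi.sum() + 1e-10)
        ydd = alpha * (beta * (goal - y) - yd) + f
        yd += ydd * dt
        y += yd * dt
        x += -alpha_x * x * dt  # canonical system: phase decays 1 -> 0
        traj.append(y)
    return np.array(traj)

# With zero forcing weights the system simply converges to the goal;
# nonzero weights shape the path taken on the way there.
traj = dmp_rollout(y0=0.0, goal=1.0, weights=np.zeros(10))
```

Because the network predicts a small set of trajectory parameters rather than a torque at every timestep, one forward pass commits the agent to an entire smooth trajectory segment, which is the scalability argument made in the abstract.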


AUTHORS

Shikhar Bahl

Mustafa Mukadam

Abhinav Gupta

Deepak Pathak

Research Topics

Robotics

Reinforcement Learning

