DD-PPO: Learning Near-Perfect PointGoal Navigators from 2.5 Billion Frames

April 26, 2020

Abstract

We present Decentralized Distributed Proximal Policy Optimization (DD-PPO), a method for distributed reinforcement learning in resource-intensive simulated environments. DD-PPO is distributed (uses multiple machines), decentralized (lacks a centralized server), and synchronous (no computation is ever ‘stale’), making it conceptually simple and easy to implement. In our experiments on training virtual robots to navigate in Habitat-Sim (Savva et al., 2019), DD-PPO exhibits near-linear scaling – achieving a speedup of 107x on 128 GPUs over a serial implementation. We leverage this scaling to train an agent for 2.5 Billion steps of experience (the equivalent of 80 years of human experience) – over 6 months of GPU-time training in under 3 days of wall-clock time with 64 GPUs.
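
To make the decentralized, synchronous update pattern concrete, the sketch below shows one PyTorch-style worker loop in the spirit of DD-PPO: every GPU worker steps its own simulator, computes PPO gradients locally, and averages them with an allreduce instead of sending them to a central server. This is a minimal illustration under assumed helpers (make_env, collect_rollout, and ppo_loss are hypothetical callables supplied by the caller), not the implementation released in habitat-api.

import torch
import torch.distributed as dist

def train_worker(policy, optimizer, make_env, collect_rollout, ppo_loss, num_updates):
    # One process per GPU; all processes join the same job, no central server.
    dist.init_process_group(backend="nccl")
    world_size = dist.get_world_size()
    env = make_env(rank=dist.get_rank())        # each worker owns its own simulator

    for _ in range(num_updates):
        rollout = collect_rollout(env, policy)  # gather experience in simulation
        loss = ppo_loss(policy, rollout)        # clipped PPO surrogate + value loss

        optimizer.zero_grad()
        loss.backward()

        # Synchronous, decentralized step: average gradients across all workers,
        # so no worker ever applies a stale update.
        for p in policy.parameters():
            if p.grad is not None:
                dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
                p.grad /= world_size

        optimizer.step()                        # identical update on every worker

Because every worker waits for the averaged gradient before continuing, the scheme trades some idle time on the fastest workers for the simplicity of never using stale parameters.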

This massive-scale training not only sets the state of the art on the Habitat Autonomous Navigation Challenge 2019, but essentially ‘solves’ the task – near-perfect autonomous navigation in an unseen environment without access to a map, directly from an RGB-D camera and a GPS+Compass sensor. Fortuitously, error vs. computation exhibits a power-law-like distribution; thus, 90% of peak performance is obtained relatively early (at 100 million steps) and relatively cheaply (under 1 day with 8 GPUs). Finally, we show that the scene-understanding and navigation policies learned here can be transferred to other navigation tasks – the analog of ‘ImageNet pre-training + task-specific fine-tuning’ for embodied AI. Our model outperforms ImageNet pre-trained CNNs on these transfer tasks and can serve as a universal resource (all models and code are publicly available).
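
As a rough illustration of that ‘pre-train + fine-tune’ recipe, the sketch below loads the visual encoder weights from a DD-PPO PointGoal checkpoint, freezes them, and attaches a new task-specific head. The checkpoint filename, the visual_encoder key prefix, and the ResNet-50 stand-in backbone are assumptions made for this example; the actual released checkpoints and loading utilities are in the habitat-api repository.

import torch
import torch.nn as nn
import torchvision

def build_encoder():
    # Stand-in CNN backbone for illustration; the released agents use a
    # ResNet-family visual encoder, not an off-the-shelf torchvision model.
    backbone = torchvision.models.resnet50(weights=None)
    backbone.fc = nn.Identity()                 # expose 2048-d features
    return backbone

encoder = build_encoder()
ckpt = torch.load("ddppo_pointnav.pth", map_location="cpu")   # hypothetical file name
encoder_weights = {
    k[len("visual_encoder."):]: v                             # hypothetical key prefix
    for k, v in ckpt["state_dict"].items()
    if k.startswith("visual_encoder.")
}
encoder.load_state_dict(encoder_weights, strict=False)

# Freeze the navigation-pretrained features and train only a small task head.
for p in encoder.parameters():
    p.requires_grad = False
task_head = nn.Linear(2048, 4)                  # e.g. 4 discrete navigation actions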

Code: github.com/facebookresearch/habitat-api
Video: https://www.youtube.com/watch?v=5PBpV5i1v4


Authors

Erik Wijmans

Abhishek Kadian

Ari Morcos

Stefan Lee

Irfan Essa

Devi Parikh

Manolis Savva

Dhruv Batra

Research Areas

Computer Vision

Related Publications

November 10, 2022

Computer Vision

Learning State-Aware Visual Representations from Audible Interactions

Unnat Jain, Abhinav Gupta, Himangi Mittal, Pedro Morgado

November 06, 2022

Computer Vision

Neural Basis Models for Interpretability

Filip Radenovic, Abhimanyu Dubey, Dhruv Mahajan

October 25, 2022

Theseus: A Library for Differentiable Nonlinear Optimization

Mustafa Mukadam, Austin Wang, Brandon Amos, Daniel DeTone, Jing Dong, Joe Ortiz, Luis Pineda, Maurizio Monge, Ricky Chen, Shobha Venkataraman, Stuart Anderson, Taosha Fan, Paloma Sodhi

October 22, 2022

Computer Vision

Time-rEversed diffusioN tEnsor Transformer: A new TENET of Few-Shot Object Detection

Naila Murray, Lei Wang, Piotr Koniusz, Shan Zhang

April 30, 2018

Computer Vision

NAM – Unsupervised Cross-Domain Image Mapping without Cycles or GANs

Yedid Hoshen, Lior Wolf

December 11, 2019

Speech & Audio

Computer Vision

Hyper-Graph-Network Decoders for Block Codes

Eliya Nachmani, Lior Wolf

April 30, 2018

NLP

Speech & Audio

Identifying Analogies Across Domains

Yedid Hoshen, Lior Wolf

November 01, 2018

NLP

Computer Vision

Non-Adversarial Unsupervised Word Translation

Yedid Hoshen, Lior Wolf
