Ego4D: Around the World in 3,000 Hours of Egocentric Video

October 14, 2021

Abstract

We introduce Ego4D, a massive-scale egocentric video dataset and benchmark suite. It offers 3,025 hours of daily-life activity video spanning hundreds of scenarios (household, outdoor, workplace, leisure, etc.) captured by 855 unique camera wearers from 74 worldwide locations and 9 different countries. The approach to collection is designed to uphold rigorous privacy and ethics standards, with consenting participants and robust de-identification procedures where relevant. Ego4D dramatically expands the volume of diverse egocentric video footage publicly available to the research community. Portions of the video are accompanied by audio, 3D meshes of the environment, eye gaze, stereo, and/or synchronized videos from multiple egocentric cameras at the same event. Furthermore, we present a host of new benchmark challenges centered around understanding the first-person visual experience in the past (querying an episodic memory), present (analyzing hand-object manipulation, audio-visual conversation, and social interactions), and future (forecasting activities). By publicly sharing this massive annotated dataset and benchmark suite, we aim to push the frontier of first-person perception.
Project Page: https://ego4d-data.org/
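To illustrate the kind of scenario-level bookkeeping a dataset of this scale involves, here is a minimal sketch of filtering and totaling video metadata by scenario. The record fields (`uid`, `scenario`, `duration_hours`) are hypothetical placeholders, not Ego4D's actual schema:

```python
# Hypothetical metadata records; Ego4D's real schema may differ.
videos = [
    {"uid": "v001", "scenario": "household", "duration_hours": 1.5},
    {"uid": "v002", "scenario": "outdoor", "duration_hours": 0.75},
    {"uid": "v003", "scenario": "household", "duration_hours": 2.0},
]

def total_hours(records, scenario=None):
    """Sum video durations, optionally restricted to one scenario."""
    return sum(
        r["duration_hours"]
        for r in records
        if scenario is None or r["scenario"] == scenario
    )

print(total_hours(videos))               # total footage across all scenarios
print(total_hours(videos, "household"))  # household-only subset
```

Aggregations like this are how headline figures such as "3,025 hours across hundreds of scenarios" would be computed from per-video metadata.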

AUTHORS

Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, Miguel Martin, Tushar Nagarajan, Ilija Radosavovic, Santhosh Ramakrishnan, Fiona Ryan, Jayant Sharma, Michael Wray, Mengmeng Xu, Eric Zhongcong Xu, Chen Zhao, Siddhant Bansal, Dhruv Batra, Vincent Cartillier, Sean Crane, Tien Do, Akshay Erapall, Christoph Feichtenhofer, Adriano Fragomeni, Qichen Fu, Christian Fuegen, Abrham Gebreselasie, Cristina Gonzalez, James Hillis, Xuhua Huang, Yifei Huang, Wenqi Jia, Weslie Khoo, Jachym Kolar, Satwik Kottur, Anurag Kumar, Federico Landini, Chao Li, Zhenqiang Li, Karttikeya Mangalam, Raghava Modhugu, Jonathan Munro, Tullie Murrell, Takumi Nishiyasu, Will Price, Paola Ruiz Puentes, Merey Ramazanova, Leda Sari, Kiran Somasundaram, Audrey Southerland, Yusuke Sugano, Ruijie Tao, Minh Vo, Yuchen Wang, Xindi Wu, Takuma Yagi, Yunyi Zhu, Pablo Arbelaez, David Crandall, Dima Damen, Giovanni Maria Farinella, Bernard Ghanem, Vamsi Krishna Ithapu, C. V. Jawahar, Hanbyul Joo, Kris Kitani, Haizhou Li, Richard Newcombe, Aude Oliva, Hyun Soo Park, James M. Rehg, Yoichi Sato, Jianbo Shi, Mike Zheng Shou, Antonio Torralba, Lorenzo Torresani, Mingfei Yan, Jitendra Malik

Publisher

arXiv

Research Topics

Computer Vision

Graphics

