Ego-Topo: Environment Affordances from Egocentric Video

April 01, 2020

Abstract

First-person video naturally brings the use of a physical environment to the forefront, since it shows the camera wearer interacting fluidly in a space based on their intentions. However, current methods largely separate the observed actions from the persistent space itself. We introduce a model for environment affordances that is learned directly from egocentric video. The main idea is to gain a human-centric model of a physical space (such as a kitchen) that captures (1) the primary spatial zones of interaction and (2) the likely activities they support. Our approach decomposes a space into a topological map derived from first-person activity, organizing an ego-video into a series of visits to the different zones. Further, we show how to link zones across multiple related environments (e.g., from videos of multiple kitchens) to obtain a consolidated representation of environment functionality. On EPIC-Kitchens and EGTEA+, we demonstrate our approach for learning scene affordances and anticipating future actions in long-form video. Project page: http://vision.cs.utexas.edu/projects/ego-topo/
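To make the core idea concrete, here is a minimal, illustrative sketch of organizing a video into zone visits and a topological map. This is not the paper's method (which learns zone localization from visual features); it assumes per-clip zone labels are already available, and all names are hypothetical.

```python
from collections import defaultdict

def build_topo_graph(zone_sequence):
    """Build a toy topological map from per-clip zone assignments.

    zone_sequence: zone ids, one per video clip, in temporal order.
    Returns (visits, edges): visits is the sequence of zone visits
    (consecutive duplicates collapsed into one visit); edges counts
    transitions between zones, i.e., how often the camera wearer
    moved from one zone to another.
    """
    visits = []
    for z in zone_sequence:
        # A new visit starts only when the zone changes.
        if not visits or visits[-1] != z:
            visits.append(z)
    edges = defaultdict(int)
    for a, b in zip(visits, visits[1:]):
        edges[(a, b)] += 1
    return visits, dict(edges)

# Hypothetical example: clips localized to kitchen zones over time.
seq = ["sink", "sink", "stove", "counter", "stove", "stove", "sink"]
visits, edges = build_topo_graph(seq)
# visits -> ["sink", "stove", "counter", "stove", "sink"]
```

In the paper, node discovery and linking zones across kitchens rely on learned visual similarity rather than given labels; the sketch only shows the graph structure the abstract describes.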

Publisher

CVPR

Research Topics

Computer Vision

Related Publications

December 15, 2021 · RESEARCH
Sample-and-threshold differential privacy: Histograms and applications
Akash Bharadwaj, Graham Cormode

December 06, 2021 · COMPUTER VISION · CORE MACHINE LEARNING
Debugging the Internals of Convolutional Networks
Bilal Alsallakh, Narine Kokhlikyan, Vivek Miglani, Shubham Muttepawar, Edward Wang (AI Infra), Sara Zhang, David Adkins, Orion Reblitz-Richardson

December 06, 2021 · COMPUTER VISION
Early Convolutions Help Transformers See Better
Tete Xiao, Mannat Singh, Eric Mintun, Trevor Darrell, Piotr Dollar, Ross Girshick

November 09, 2021 · COMPUTER VISION · CORE MACHINE LEARNING
Grounding inductive biases in natural images: invariance stems from variations in data
Diane Bouchacourt, Mark Ibrahim, Ari Morcos