
COMPUTER VISION

EGO-TOPO: Environment Affordances from Egocentric Video

June 14, 2020

Abstract

First-person video naturally brings the use of a physical environment to the forefront, since it shows the camera wearer interacting fluidly in a space based on their intentions. However, current methods largely separate the observed actions from the persistent space itself. We introduce a model for environment affordances that is learned directly from egocentric video. The main idea is to gain a human-centric model of a physical space (such as a kitchen) that captures (1) the primary spatial zones of interaction and (2) the likely activities they support. Our approach decomposes a space into a topological map derived from first-person activity, organizing an ego-video into a series of visits to the different zones. Further, we show how to link zones across multiple related environments (e.g., from videos of multiple kitchens) to obtain a consolidated representation of environment functionality. On EPIC-Kitchens and EGTEA+, we demonstrate our approach for learning scene affordances and anticipating future actions in long-form video. Project page: http://vision.cs.utexas.edu/projects/ego-topo/
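The zone-discovery idea in the abstract — grouping an egocentric video into visits to a small set of interaction zones and linking zones that are visited consecutively — can be sketched roughly as follows. This is a simplified illustration, not the authors' implementation: the `build_topo_map` function, the fixed cosine-similarity threshold, and the assumption that each clip arrives with a precomputed feature vector and an action label are all inventions for the example.

```python
import numpy as np

def build_topo_map(clip_feats, clip_actions, sim_thresh=0.9):
    """Group video clips into zones by visual similarity and link
    consecutively visited zones, yielding a small topological graph.

    clip_feats:   list of 1-D feature vectors, one per clip (assumed given)
    clip_actions: list of action labels, one per clip (assumed given)
    """
    # L2-normalize so that a dot product is cosine similarity.
    feats = [f / np.linalg.norm(f) for f in clip_feats]

    zone_centers = []   # one representative feature per zone (first clip seen there)
    zone_actions = []   # set of actions observed in each zone -> crude affordances
    edges = set()       # undirected links between consecutively visited zones
    prev = None

    for f, action in zip(feats, clip_actions):
        # Match the clip against existing zones by cosine similarity.
        sims = [float(c @ f) for c in zone_centers]
        if sims and max(sims) >= sim_thresh:
            z = int(np.argmax(sims))        # revisit of a known zone
        else:
            z = len(zone_centers)           # discover a new zone
            zone_centers.append(f)
            zone_actions.append(set())
        zone_actions[z].add(action)
        # Moving from one zone to another creates a topological edge.
        if prev is not None and prev != z:
            edges.add((min(prev, z), max(prev, z)))
        prev = z

    return zone_centers, zone_actions, edges
```

For instance, a clip sequence that visits a sink, moves to a counter, and returns to the sink would yield two zone nodes joined by one edge, with the sink zone accumulating all washing-related actions — a toy version of reading off "likely activities" per zone.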


AUTHORS

Written by

Tushar Nagarajan

Yanghao Li

Christoph Feichtenhofer

Kristen Grauman

Publisher

Conference on Computer Vision and Pattern Recognition (CVPR)

Research Topics

Computer Vision

Recent Publications

June 16, 2020

COMPUTER VISION

PIFuHD: Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization

Due to memory limitations in current hardware, previous approaches tend to take low-resolution images as input to cover large spatial context, and produce less precise (or low-resolution) 3D estimates as a result. We address this limitation by formulating a multi-level architecture that is end-to-end trainable.

Shunsuke Saito, Tomas Simon, Jason Saragih, Hanbyul Joo

June 14, 2020

COMPUTER VISION

ViBE: Dressing for Diverse Body Shapes

We introduce ViBE, a VIsual Body-aware Embedding that captures clothing’s affinity with different body shapes.

Wei-Lin Hsiao, Kristen Grauman

June 14, 2020

COMPUTER VISION

Designing Network Design Spaces

In this work, we present a new network design paradigm. Our goal is to help advance the understanding of network design and discover design principles that generalize across settings.

Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollar
