Grounding inductive biases in natural images: invariance stems from variations in data

November 09, 2021

Abstract

To perform well on unseen and potentially out-of-distribution samples, it is desirable for machine learning models to have a predictable response with respect to transformations affecting the factors of variation of the input. Here, we study the relative importance of several types of inductive biases towards such predictable behavior: the choice of data, their augmentations, and model architectures. Invariance is commonly achieved through hand-engineered data augmentation, but do standard data augmentations address transformations that explain variations in real data? While prior work has focused on synthetic data, we attempt here to characterize the factors of variation in a real dataset, ImageNet, and study the invariance of both standard residual networks and the recently proposed vision transformer with respect to changes in these factors. We show that standard augmentation relies on a precise combination of translation and scale, with translation recapturing most of the performance improvement, despite the (approximate) translation invariance built into convolutional architectures such as residual networks. In fact, we found that scale and translation invariance was similar across residual networks and vision transformer models despite their markedly different architectural inductive biases. We show that the training data itself is the main source of invariance, and that data augmentation only further increases the learned invariances. Notably, the invariances learned during training align with the ImageNet factors of variation we found. Finally, we find that the main factors of variation in ImageNet mostly relate to appearance and are specific to each class.
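The sketch below is a rough, hypothetical illustration of the setup the abstract describes, not the paper's exact protocol: standard ImageNet augmentation couples scale and translation through a random resized crop, and invariance can be probed by comparing a network's representation of an image before and after such transformations. The choice of ResNet-50 penultimate features and cosine similarity as the invariance measure are illustrative assumptions.

# Illustrative sketch only (assumed setup: PyTorch + torchvision; not the paper's exact protocol).
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet training augmentation: a random resized crop couples
# a random scale with a random crop position (i.e., translation).
augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.08, 1.0)),
    transforms.ToTensor(),
])
# Deterministic reference view.
reference = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

# Pretrained residual network; penultimate features serve as the representation.
model = models.resnet50(weights="IMAGENET1K_V1")
model.fc = torch.nn.Identity()
model.eval()

@torch.no_grad()
def invariance_score(img: Image.Image, n_views: int = 8) -> float:
    """Mean cosine similarity between the reference view's features and the
    features of randomly augmented views; higher means more invariant."""
    ref = model(reference(img).unsqueeze(0))                      # (1, 2048)
    views = torch.stack([augment(img) for _ in range(n_views)])   # (n, 3, 224, 224)
    feats = model(views)                                          # (n, 2048)
    return F.cosine_similarity(feats, ref.expand_as(feats)).mean().item()

The same probe could be pointed at a vision transformer (e.g., torchvision's vit_b_16 with its classification head removed) to compare learned invariances across architectures, in the spirit of the comparison the abstract discusses.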


AUTHORS

Diane Bouchacourt

Mark Ibrahim

Ari Morcos

Publisher

NeurIPS

Research Topics

Computer Vision

Core Machine Learning

Related Publications

December 06, 2021

COMPUTER VISION

CORE MACHINE LEARNING

Debugging the Internals of Convolutional Networks

Bilal Alsallakh, Narine Kokhlikyan, Vivek Miglani, Shubham Muttepawar, Edward Wang (AI Infra), Sara Zhang, David Adkins, Orion Reblitz-Richardson

December 06, 2021

CORE MACHINE LEARNING

Revisiting Graph Neural Networks for Link Prediction

Yinglong Xia

December 06, 2021

COMPUTER VISION

Early Convolutions Help Transformers See Better

Tete Xiao, Mannat Singh, Eric Mintun, Trevor Darrell, Piotr Dollar, Ross Girshick

December 06, 2021

INTEGRITY

CORE MACHINE LEARNING

BulletTrain: Accelerating Robust Neural Network Training via Boundary Example Mining

Weizhe Hua, Yichi Zhang, Chuan Guo, Zhiru Zhang, Edward Suh
