ConViT: Improving Vision Transformers with Soft Convolutional Inductive Biases

July 18, 2021

Abstract

Convolutional architectures have proven extremely successful for vision tasks. Their hard inductive biases enable sample-efficient learning, but come at the cost of a potentially lower performance ceiling. Vision Transformers (ViTs) rely on more flexible self-attention layers, and have recently outperformed CNNs for image classification. However, they require costly pre-training on large external datasets or distillation from pretrained convolutional networks. In this paper, we ask the following question: is it possible to combine the strengths of these two architectures while avoiding their respective limitations? To this end, we introduce gated positional self-attention (GPSA), a form of positional self-attention which can be equipped with a “soft” convolutional inductive bias. We initialize the GPSA layers to mimic the locality of convolutional layers, then give each attention head the freedom to escape locality by adjusting a gating parameter regulating the attention paid to position versus content information. The resulting convolutional-like ViT architecture, ConViT, outperforms the DeiT (Touvron et al., 2020) on ImageNet, while offering a much improved sample efficiency. We further investigate the role of locality in learning by first quantifying how it is encouraged in vanilla self-attention layers, then analyzing how it is escaped in GPSA layers. We conclude by presenting various ablations to better understand the success of the ConViT. Our code and models are released publicly at https://github.com/facebookresearch/convit.
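
As a concrete illustration of the gating mechanism described above, here is a minimal PyTorch sketch of a GPSA layer assembled from the abstract's description: each head mixes a content-based attention map with a position-based one through a learned gate σ(λ_h). The head count, the relative-position features, and the initialization below are simplifying assumptions for illustration, not the authors' exact implementation (which is available in the repository linked above; notably, the paper initializes the positional attention so each head mimics a shifted convolutional kernel, which this sketch omits).

```python
import torch
import torch.nn as nn


class GPSA(nn.Module):
    """Gated positional self-attention (illustrative sketch)."""

    def __init__(self, dim, num_heads=4, num_patches=196):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qk = nn.Linear(dim, 2 * dim, bias=False)
        self.v = nn.Linear(dim, dim, bias=False)
        self.proj = nn.Linear(dim, dim)
        # One gate per head; sigmoid(gate) weights positional vs. content
        # attention. Initialized positive so the positional (local) map
        # dominates early on, mimicking a convolution.
        self.gate = nn.Parameter(torch.ones(num_heads))
        # Relative-position features between patches on a sqrt(N) x sqrt(N)
        # grid: (dx, dy, dx^2 + dy^2), projected to one score per head.
        side = int(num_patches ** 0.5)
        coords = torch.stack(torch.meshgrid(
            torch.arange(side), torch.arange(side), indexing="ij"), -1)
        coords = coords.reshape(-1, 2).float()              # (N, 2)
        rel = coords[:, None, :] - coords[None, :, :]       # (N, N, 2)
        feats = torch.cat([rel, (rel ** 2).sum(-1, keepdim=True)], -1)
        self.register_buffer("rel_feats", feats)            # (N, N, 3)
        self.pos_proj = nn.Linear(3, num_heads)

    def forward(self, x):                                   # x: (B, N, dim)
        B, N, C = x.shape                                   # N == num_patches
        q, k = self.qk(x).chunk(2, dim=-1)
        q = q.view(B, N, self.num_heads, self.head_dim).transpose(1, 2)
        k = k.view(B, N, self.num_heads, self.head_dim).transpose(1, 2)
        # Content-based attention: standard scaled dot-product scores.
        content = ((q @ k.transpose(-2, -1)) / self.head_dim ** 0.5).softmax(-1)
        # Position-based attention: depends only on patch geometry.
        position = self.pos_proj(self.rel_feats)            # (N, N, H)
        position = position.permute(2, 0, 1).softmax(-1)    # (H, N, N)
        # Convex per-head mixture of the two normalized attention maps.
        g = torch.sigmoid(self.gate).view(1, -1, 1, 1)      # (1, H, 1, 1)
        attn = (1.0 - g) * content + g * position
        v = self.v(x).view(B, N, self.num_heads, self.head_dim).transpose(1, 2)
        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)
```

In this sketch, `GPSA(dim=192, num_heads=4, num_patches=196)` would slot in where a DeiT-style block's self-attention sits for a 14×14 patch grid; a head whose σ(λ_h) decays toward zero during training has fully "escaped" locality and behaves like ordinary content-based self-attention.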

Authors

Stéphane d'Ascoli

Hugo Touvron

Matthew L. Leavitt

Ari S. Morcos

Giulio Biroli

Levent Sagun

Publisher

ICML 2021

Research Topics

Core Machine Learning

Related Publications

November 27, 2022

Core Machine Learning

Neural Attentive Circuits

Nicolas Ballas, Bernhard Schölkopf, Chris Pal, Francesco Locatello, Li Erran Li, Martin Weiss, Nasim Rahaman, Yoshua Bengio

November 16, 2022

NLP

Memorization Without Overfitting: Analyzing the Training Dynamics of Large Language Models

Kushal Tirumala, Aram H. Markosyan, Armen Aghajanyan, Luke Zettlemoyer

November 08, 2022

Theory

Beyond neural scaling laws: beating power law scaling via data pruning

Ari Morcos, Shashank Shekhar, Surya Ganguli, Ben Sorscher, Robert Geirhos

August 08, 2022

Core Machine Learning

Opacus: User-Friendly Differential Privacy Library in PyTorch

Ashkan Yousefpour, Akash Bharadwaj, Alex Sablayrolles, Graham Cormode, Igor Shilov, Ilya Mironov, Jessica Zhao, John Nguyen, Karthik Prasad, Mani Malek, Sayan Ghosh

December 07, 2020

Core Machine Learning

Adversarial Example Games

Avishek Joey Bose, Gauthier Gidel, Andre Cianflone, Pascal Vincent, Simon Lacoste-Julien, William L. Hamilton

November 03, 2020

Core Machine Learning

Robust Embedded Deep K-means Clustering

Rui Zhang, Hanghang Tong, Yinglong Xia, Yada Zhu
