
Understanding contrastive versus reconstructive self-supervised learning of Vision Transformers

November 08, 2022

Abstract

While self-supervised learning on Vision Transformers (ViTs) has led to state-of-the-art results on image classification benchmarks, there has been little research on understanding the differences in representations that arise from different training methods. We address this by using Centered Kernel Alignment (CKA) to compare the neural representations learned by contrastive learning and reconstructive learning, two leading paradigms for self-supervised learning. We find that the representations learned by reconstructive learning are significantly dissimilar from those learned by contrastive learning. We analyze these differences and find that they start to arise early in the network depth and are driven mostly by the attention and normalization layers in a transformer block. We also find that these representational differences translate to class predictions and the linear separability of classes in the pretrained models. Finally, we analyze how fine-tuning affects these representational differences and discover that a fine-tuned reconstructive model becomes more similar to a pretrained contrastive model.
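
For reference, linear CKA between two sets of activations can be computed in a few lines. The sketch below is a minimal plain-NumPy illustration of the standard HSIC-based formulation (Kornblith et al., 2019), not the paper's own code; array shapes and names are assumptions for the example.

    import numpy as np

    def center_gram(K):
        # Double-center a Gram matrix: H K H with H = I - (1/n) * ones
        n = K.shape[0]
        H = np.eye(n) - np.ones((n, n)) / n
        return H @ K @ H

    def linear_cka(X, Y):
        # X: (n_examples, d1) activations from one model/layer
        # Y: (n_examples, d2) activations from another (same examples)
        Kx = center_gram(X @ X.T)
        Ky = center_gram(Y @ Y.T)
        # Normalized HSIC: Frobenius inner product over Frobenius norms, in [0, 1]
        hsic = (Kx * Ky).sum()
        return hsic / (np.linalg.norm(Kx) * np.linalg.norm(Ky))

    # Hypothetical usage: compare per-layer ViT activations from two models
    X = np.random.randn(512, 768)  # e.g., contrastive model, one block's output
    Y = np.random.randn(512, 768)  # e.g., reconstructive model, same block
    print(linear_cka(X, Y))

Because CKA operates on Gram matrices over the same set of examples, it compares layers of different widths, which is what makes block-by-block comparisons across the two training paradigms possible.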

AUTHORS

Ari Morcos

Florian Bordes

Pascal Vincent

Shashank Shekhar

Publisher

NeurIPS SSL Workshop

Research Topics

Core Machine Learning

Related Publications

February 15, 2024

RANKING AND RECOMMENDATIONS

CORE MACHINE LEARNING

TASER: Temporal Adaptive Sampling for Fast and Accurate Dynamic Graph Representation Learning

Danny Deng, Hongkuan Zhou, Hanqing Zeng, Yinglong Xia, Chris Leung, Jianbo Li, Rajgopal Kannan, Viktor Prasanna

February 15, 2024

CORE MACHINE LEARNING

Revisiting Feature Prediction for Learning Visual Representations from Video

Adrien Bardes, Quentin Garrido, Xinlei Chen, Michael Rabbat, Yann LeCun, Mido Assran, Nicolas Ballas, Jean Ponce

January 09, 2024

CORE MACHINE LEARNING

Accelerating a Triton Fused Kernel for W4A16 Quantized Inference with SplitK Work Decomposition

Less Wright, Adnan Hoque

January 06, 2024

RANKING AND RECOMMENDATIONS

REINFORCEMENT LEARNING

Learning to bid and rank together in recommendation systems

Geng Ji, Wentao Jiang, Jiang Li, Fahmid Morshed Fahid, Zhengxing Chen, Yinghua Li, Jun Xiao, Chongxi Bao, Zheqing (Bill) Zhu
