December 13, 2022
Current self-supervised learning algorithms are often modality-specific and require large amounts of computational resources. To address these issues, we increase the training efficiency of data2vec, a learning objective that generalizes across several modalities. We do not encode masked tokens, use a fast convolutional decoder, and amortize the effort to build teacher representations. data2vec 2.0 benefits from the rich contextualized target representations introduced in data2vec, which enable a fast self-supervised learner. Experiments on ImageNet-1K image classification show that data2vec 2.0 matches the accuracy of Masked Autoencoders with 16.4x lower pre-training time, on Librispeech speech recognition it performs as well as wav2vec 2.0 in 10.6x less time, and on GLUE natural language understanding it matches a retrained RoBERTa model in half the time. Trading some speed for accuracy results in ImageNet-1K top-1 accuracy of 86.8% with a ViT-L model trained for 150 epochs.
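The efficiency ideas in the abstract can be illustrated with a minimal PyTorch sketch: the student encodes only the visible patches (masked tokens are never encoded), a lightweight convolutional decoder reconstructs targets for the masked positions, and the teacher's contextualized targets are built once per sample and reused across several masked versions. This is an illustrative sketch only, not the authors' implementation; names such as TinyEncoder, ConvDecoder, and multi_mask_step, as well as all dimensions and hyperparameters, are assumptions for demonstration.

```python
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    """Stand-in transformer encoder over a sequence of patch embeddings."""
    def __init__(self, dim=64, depth=2, heads=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):                       # x: (batch, seq, dim)
        return self.blocks(x)

class ConvDecoder(nn.Module):
    """Lightweight convolutional decoder used in place of a transformer decoder."""
    def __init__(self, dim=64, kernel=3):
        super().__init__()
        self.conv = nn.Conv1d(dim, dim, kernel_size=kernel, padding=kernel // 2)

    def forward(self, x):                       # x: (batch, seq, dim)
        return self.conv(x.transpose(1, 2)).transpose(1, 2)

def multi_mask_step(student, teacher, decoder, patches, num_masks=4, mask_ratio=0.6):
    """One training step sketch: teacher targets are built once and reused
    across several masked versions of the sample; the student only encodes
    the visible (unmasked) patches."""
    with torch.no_grad():
        targets = teacher(patches)              # contextualized targets, built once

    batch, seq, dim = patches.shape
    num_keep = int(seq * (1 - mask_ratio))
    loss = 0.0
    for _ in range(num_masks):
        # pick a random subset of visible positions for this masked version
        keep = torch.rand(batch, seq).argsort(dim=1)[:, :num_keep]
        idx = keep.unsqueeze(-1).expand(-1, -1, dim)
        visible = torch.gather(patches, 1, idx)
        encoded = student(visible)              # masked tokens are never encoded
        # place encoded visible tokens back into a zero-filled full-length sequence
        full = torch.zeros_like(patches).scatter(1, idx, encoded)
        pred = decoder(full)                    # cheap conv decoder predicts targets
        loss = loss + ((pred - targets) ** 2).mean()
    return loss / num_masks

# usage sketch with toy data
student, teacher, decoder = TinyEncoder(), TinyEncoder(), ConvDecoder()
patches = torch.randn(2, 16, 64)                # (batch, patches, dim), e.g. image patch embeddings
loss = multi_mask_step(student, teacher, decoder, patches)
loss.backward()
```

Amortization shows up in the single no-grad teacher pass being shared by all num_masks student passes, which is where much of the reported speed-up over re-encoding every masked view would come from under these assumptions.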
Written by
Michael Auli
Alexei Baevski
Arun Babu
Wei-Ning Hsu
Publisher
arXiv