Effectiveness of self-supervised pre-training for ASR

May 4, 2020

Abstract

We compare self-supervised representation learning algorithms which either explicitly quantize the audio data or learn representations without quantization. We find the former to be more accurate, since it builds a good vocabulary of the data through the vq-wav2vec [1] self-supervision approach, which in turn enables learning of effective representations in subsequent BERT training. Unlike previous work, we directly fine-tune the pre-trained BERT models on transcribed speech using a Connectionist Temporal Classification (CTC) loss instead of feeding the representations into a task-specific model. We also propose a BERT-style model that learns directly from continuous audio data, and compare pre-training on raw audio to pre-training on spectral features. Fine-tuning a BERT model on 10 hours of labeled Librispeech data with a vq-wav2vec vocabulary is almost as good as the best known reported system trained on 100 hours of labeled data on test-clean, while achieving a 25% WER reduction on test-other. When using only 10 minutes of labeled data, WER is 25.2 on test-other and 16.3 on test-clean. This demonstrates that self-supervision can enable speech recognition systems trained on a near-zero amount of transcribed data.
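
The fine-tuning step described above, taking a BERT model pre-trained on discretized vq-wav2vec tokens and training it end-to-end on transcribed speech with a CTC loss, can be illustrated with a minimal PyTorch sketch. The encoder module, vocabulary sizes, and tensor shapes below are illustrative assumptions for exposition, not the authors' released code.

```python
# Minimal sketch (PyTorch): fine-tuning a pre-trained encoder with a CTC loss.
# `encoder` stands in for a BERT model pre-trained on vq-wav2vec tokens;
# shapes and names are assumptions, not the paper's implementation.
import torch
import torch.nn as nn


class CTCFineTuner(nn.Module):
    def __init__(self, encoder: nn.Module, hidden_dim: int, num_chars: int):
        super().__init__()
        self.encoder = encoder                        # pre-trained, fine-tuned end-to-end
        self.proj = nn.Linear(hidden_dim, num_chars)  # character vocab incl. CTC blank (index 0)

    def forward(self, tokens):
        # tokens: (batch, time) discrete vq-wav2vec codes
        hidden = self.encoder(tokens)                 # (batch, time, hidden_dim)
        logits = self.proj(hidden)                    # (batch, time, num_chars)
        # nn.CTCLoss expects (time, batch, num_chars) log-probabilities
        return logits.log_softmax(dim=-1).transpose(0, 1)


ctc_loss = nn.CTCLoss(blank=0, zero_infinity=True)


def training_step(model, tokens, targets, input_lengths, target_lengths, optimizer):
    log_probs = model(tokens)
    loss = ctc_loss(log_probs, targets, input_lengths, target_lengths)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the output projection is the only newly initialized component; the rest of the network starts from the pre-trained weights, which is what makes fine-tuning feasible with very small amounts of labeled data.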

AUTHORS

Alexei Baevski

Abdelrahman Mohamed

Publisher

International Conference on Acoustics, Speech, and Signal Processing (ICASSP)
