SUPERB: Speech processing Universal PERformance Benchmark

August 30, 2021

Abstract

Self-supervised learning (SSL) has proven vital for advancing research in natural language processing (NLP) and computer vision (CV). The paradigm pretrains a shared model on large volumes of unlabeled data and achieves state-of-the-art (SOTA) performance on various tasks with minimal adaptation. However, the speech processing community lacks a similar setup to systematically explore the paradigm. To bridge this gap, we introduce the Speech processing Universal PERformance Benchmark (SUPERB). SUPERB is a leaderboard for benchmarking the performance of a shared model across a wide range of speech processing tasks with minimal architecture changes and labeled data. Among the possible usages of the shared model, we especially focus on extracting the representations learned from SSL because of their favorable re-usability. We present a simple framework that solves SUPERB tasks by learning task-specialized lightweight prediction heads on top of the frozen shared model. Our results demonstrate that the framework is promising, as SSL representations show competitive generalizability and accessibility across SUPERB tasks. We release SUPERB as a challenge with a leaderboard and a benchmark toolkit to fuel research in representation learning and general speech processing.
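The framework described above — a frozen shared model with a task-specialized lightweight head trained on top — can be sketched in a few lines. The sketch below is illustrative only: `frozen_encoder` is a random-projection stand-in for a real SSL model, and the data, labels, and training loop are toy assumptions, not the actual SUPERB implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a frozen SSL encoder: it maps input
# features to representations and is never updated during training.
W_frozen = rng.standard_normal((80, 256)) * 0.1

def frozen_encoder(features):
    return np.maximum(features @ W_frozen, 0.0)  # frozen: no gradient updates

# Task-specialized lightweight head: a single trainable linear layer.
n_classes = 4
W_head = np.zeros((256, n_classes))

def head_logits(reps):
    return reps @ W_head

# Toy labeled data (assumed): 64 utterance-level feature vectors.
X = rng.standard_normal((64, 80))
y = rng.integers(0, n_classes, size=64)

# Train only the head with softmax cross-entropy gradient descent;
# the encoder representations are computed once and stay fixed.
reps = frozen_encoder(X)
for _ in range(200):
    logits = head_logits(reps)
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad = p.copy()
    grad[np.arange(len(y)), y] -= 1.0          # dL/dlogits for cross-entropy
    W_head -= 0.1 * (reps.T @ grad) / len(y)   # update the head only

acc = (head_logits(reps).argmax(axis=1) == y).mean()
```

Because only `W_head` is updated, the same frozen representations can be reused across many downstream tasks, which is the re-usability property the abstract emphasizes.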

AUTHORS

Shu-wen Yang

Po-Han Chi

Yung-Sung Chuang

Cheng-I Jeff Lai

Kushal Lakhotia

Yist Y. Lin

Andy T. Liu

Jiatong Shi

Xuankai Chang

Guan-Ting Lin

Tzu-Hsien Huang

Wei-Cheng Tseng

Ko-tik Lee

Da-Rong Liu

Zili Huang

Shuyan Dong

Shang-Wen Li

Shinji Watanabe

Abdelrahman Mohamed

Hung-yi Lee

Publisher

INTERSPEECH 2021

Related Publications

September 10, 2019

NLP

Bridging the Gap Between Relevance Matching and Semantic Matching for Short Text Similarity Modeling

A core problem of information retrieval (IR) is relevance matching, which is to rank documents by relevance to a user’s query. On the other hand, many NLP problems, such as question answering and paraphrase identification, can be considered…

Jinfeng Rao, Linqing Liu, Yi Tay, Wei Yang, Peng Shi, Jimmy Lin

June 11, 2019

NLP

COMPUTER VISION

Adversarial Inference for Multi-Sentence Video Description

While significant progress has been made in the image captioning task, video description is still in its infancy due to the complex nature of video data. Generating multi-sentence descriptions for long videos is even more challenging. Among the…

Jae Sung Park, Marcus Rohrbach, Trevor Darrell, Anna Rohrbach

May 17, 2019

NLP

Unsupervised Hyper-alignment for Multilingual Word Embeddings

We consider the problem of aligning continuous word representations, learned in multiple languages, to a common space. It was recently shown that, in the case of two languages, it is possible to learn such a mapping without supervision. This…

Jean Alaux, Edouard Grave, Marco Cuturi, Armand Joulin

July 27, 2019

NLP

Unsupervised Question Answering by Cloze Translation

Obtaining training data for Question Answering (QA) is time-consuming and resource-intensive, and existing QA datasets are only available for limited domains and languages. In this work, we explore to what extent high quality training data is…

Patrick Lewis, Ludovic Denoyer, Sebastian Riedel
