NLP

Weak-Attention Suppression For Transformer Based Speech Recognition

October 26, 2020

Abstract

Transformers, originally proposed for natural language processing (NLP) tasks, have recently achieved great success in automatic speech recognition (ASR). However, unlike text units, adjacent acoustic units (i.e., frames) are highly correlated, and long-distance dependencies between them are weak. This suggests that ASR is likely to benefit from sparse and localized attention. In this paper, we propose Weak-Attention Suppression (WAS), a method that dynamically induces sparsity in attention probabilities. We demonstrate that WAS leads to consistent Word Error Rate (WER) improvement over strong transformer baselines. On the widely used LibriSpeech benchmark, our proposed method reduces WER by 10% on test-clean and 5% on test-other for streamable transformers, resulting in a new state of the art among streaming models. Further analysis shows that WAS learns to suppress attention for non-critical and redundant continuous acoustic frames, and is more likely to suppress past frames than future ones, indicating the importance of lookahead in attention-based ASR models.
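The core idea of suppressing weak attention can be sketched in a few lines of NumPy. This is an illustrative sketch, not the paper's exact formulation: the function name is hypothetical, and it assumes a per-query threshold of mean minus γ times the standard deviation of the attention probabilities, below which positions are masked out and the remainder renormalized.

```python
import numpy as np

def weak_attention_suppression(scores, gamma=0.5):
    """Illustrative sketch of WAS on one attention head.

    scores: (num_queries, num_keys) raw attention logits.
    gamma:  suppression strength (hypothetical default).
    """
    # Standard softmax over the key dimension.
    probs = np.exp(scores - scores.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)

    # Per-query dynamic threshold: mean minus gamma * std of the probabilities.
    mean = probs.mean(axis=-1, keepdims=True)
    std = probs.std(axis=-1, keepdims=True)
    threshold = mean - gamma * std

    # Mask the logits of weak positions, then renormalize. At least one
    # position per query always survives, since some probability >= mean.
    masked = np.where(probs < threshold, -np.inf, scores)
    new_probs = np.exp(masked - masked.max(axis=-1, keepdims=True))
    new_probs /= new_probs.sum(axis=-1, keepdims=True)
    return new_probs
```

Because the threshold is computed per query from the probability distribution itself, the induced sparsity adapts dynamically: sharply peaked distributions suppress many positions, while flat ones suppress few.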

AUTHORS

Yangyang Shi

Yongqiang Wang

Chunyang Wu

Christian Fuegen

Frank Zhang

Duc Le

Ching-Feng Yeh

Michael L. Seltzer

Publisher

Interspeech

Related Publications

June 02, 2019

SPEECH & AUDIO

NLP

The emergence of number and syntax units in LSTM language models

Recent work has shown that LSTMs trained on a generic language modeling objective capture syntax-sensitive generalizations such as long-distance number agreement. We have however no mechanistic understanding of how they accomplish this…

Yair Lakretz, Germán Kruszewski, Theo Desbordes, Dieuwke Hupkes, Stanislas Dehaene, Marco Baroni

June 01, 2019

SPEECH & AUDIO

NLP

Neural Models of Text Normalization for Speech Applications

Machine learning, including neural network techniques, has been applied to virtually every domain in natural language processing. One problem that has been somewhat resistant to effective machine learning solutions is text normalization for…

Hao Zhang, Richard Sproat, Axel H. Ng, Felix Stahlberg, Xiaochang Peng, Kyle Gorman, Brian Roark

May 17, 2019

COMPUTER VISION

SPEECH & AUDIO

GLoMo: Unsupervised Learning of Transferable Relational Graphs

Modern deep transfer learning approaches have mainly focused on learning generic feature vectors from one task that are transferable to other tasks, such as word embeddings in language and pretrained convolutional features in vision. However,…

Zhilin Yang, Jake (Junbo) Zhao, Bhuwan Dhingra, Kaiming He, William W. Cohen, Ruslan Salakhutdinov, Yann LeCun

May 06, 2019

COMPUTER VISION

NLP

No Training Required: Exploring Random Encoders for Sentence Classification

We explore various methods for computing sentence representations from pre-trained word embeddings without any training, i.e., using nothing but random parameterizations. Our aim is to put sentence embeddings on more solid footing by 1) looking…

John Wieting, Douwe Kiela
