NLP

AIPNet: Generative Adversarial Pre-Training of Accent-Invariant Network for End-to-End Speech Recognition

April 24, 2020

Abstract

As one of the major sources of speech variability, accents pose a significant challenge to the robustness of speech recognition systems. In this paper, our goal is to build a unified end-to-end speech recognition system that generalizes well across accents. To this end, we propose AIPNet (Accent-Invariant Pre-training Networks), a novel pre-training framework based on generative adversarial networks (GANs) for accent-invariant representation learning. We pre-train AIPNet to disentangle accent-invariant and accent-specific characteristics from acoustic features through adversarial training on accented data for which transcriptions are not necessarily available. We then fine-tune AIPNet by connecting the accent-invariant module to an attention-based encoder-decoder model for multi-accent speech recognition. In our experiments, the approach is compared against four baselines, including both accent-dependent and accent-independent models. Experimental results on 9 English accents show that the proposed approach outperforms all the baselines, with a 2.3 ∼ 4.5% relative reduction in average WER when transcriptions are available in all accents and a 1.6 ∼ 6.1% relative reduction when transcriptions are only available in the US accent.
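
To make the adversarial disentanglement idea concrete, below is a minimal PyTorch sketch of accent-adversarial representation learning. It uses a gradient-reversal adversary (the domain-adversarial training trick), which is a common stand-in for the paper's GAN-based formulation; the actual AIPNet architecture, its accent-specific branch, losses, and dimensions are not reproduced here, and all module names, sizes, and the dummy batch below are illustrative assumptions.

```python
# Minimal sketch of accent-adversarial pre-training in PyTorch.
# NOTE: gradient reversal is an assumed stand-in for the paper's GAN losses;
# all names, layer sizes, and data below are illustrative, not AIPNet's actual spec.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses and scales gradients on backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class AccentInvariantEncoder(nn.Module):
    """Maps acoustic features to representations meant to hide the accent."""
    def __init__(self, feat_dim=80, hidden=256):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim, hidden, num_layers=2, batch_first=True)

    def forward(self, x):          # x: (batch, time, feat_dim)
        out, _ = self.rnn(x)
        return out                 # (batch, time, hidden)

class AccentClassifier(nn.Module):
    """Adversary: tries to recover the accent label from the representation."""
    def __init__(self, hidden=256, num_accents=9):
        super().__init__()
        self.proj = nn.Linear(hidden, num_accents)

    def forward(self, h, lambd=1.0):
        # Utterance-level mean pooling, then gradient reversal so the encoder
        # is trained to make accents indistinguishable.
        h = GradReverse.apply(h.mean(dim=1), lambd)
        return self.proj(h)

# One adversarial pre-training step on a dummy batch (no transcriptions needed).
encoder, adversary = AccentInvariantEncoder(), AccentClassifier()
opt = torch.optim.Adam(list(encoder.parameters()) + list(adversary.parameters()), lr=1e-4)
feats = torch.randn(4, 100, 80)          # 4 utterances, 100 frames, 80-dim features
accent_ids = torch.randint(0, 9, (4,))   # dummy labels for 9 accents
loss = nn.functional.cross_entropy(adversary(encoder(feats)), accent_ids)
opt.zero_grad()
loss.backward()
opt.step()
```

After such pre-training, the encoder would be connected to an attention-based encoder-decoder and fine-tuned on transcribed data, as the abstract describes.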

AUTHORS

Written by

Yi-Chen Chen

Zhaojun Yang

Ching-Feng Yeh

Mahaveer Jain

Michael L. Seltzer

Publisher

International Conference on Acoustics, Speech, and Signal Processing (ICASSP)

Recent Publications

January 09, 2021

COMPUTER VISION

Tarsier: Evolving Noise Injection in Super-Resolution GANs

Super-resolution aims at increasing the resolution and level of detail within an image.…

Baptiste Roziere, Nathanaël Carraz Rakotonirina, Vlad Hosu, Andry Rasoanaivo, Hanhe Lin, Camille Couprie, Olivier Teytaud

January 01, 2021

Asynchronous Gradient-Push

We consider a multi-agent framework for distributed optimization where each agent has access to a local smooth strongly convex function, and the collective goal is to achieve consensus on the parameters that minimize the sum of the agents’…

Mahmoud Assran, Michael Rabbat

December 10, 2020

Differentiable Expected Hypervolume Improvement for Parallel Multi-Objective Bayesian Optimization

In many real-world scenarios, decision makers seek to efficiently optimize multiple competing objectives…

Samuel Daulton, Maximilian Balandat, Eytan Bakshy

December 10, 2020

Neural Sparse Voxel Fields

Photo-realistic free-viewpoint rendering of real-world scenes using classical computer graphics techniques is challenging, because it requires the difficult step of capturing…

Lingjie Liu, Jiatao Gu, Kyaw Zaw Lin, Tat-Seng Chua, Christian Theobalt
