SPEECH & AUDIO

NLP

A Two-stage Approach to Speech Bandwidth Extension

August 30, 2021

Abstract

Algorithms for speech bandwidth extension (BWE) may work in either the time domain or the frequency domain. Time-domain methods often do not sufficiently recover the high-frequency content of speech signals; frequency-domain methods are better at recovering the spectral envelope, but have difficulty reconstructing the details of the waveform. In this paper, we propose a two-stage approach to BWE, which combines the advantages of both time- and frequency-domain methods. The first stage is a frequency-domain neural network, which predicts the high-frequency part of the wide-band spectrogram from the narrow-band input spectrogram. The wide-band spectrogram is then converted into a time-domain waveform and passed through the second stage to refine the temporal details. For the first stage, we compare a convolutional recurrent network (CRN) with a temporal convolutional network (TCN), and find that the latter captures long-span dependencies as well as the former while using far fewer parameters. For the second stage, we enhance the Wave-U-Net architecture with a multi-resolution short-time Fourier transform (MSTFT) loss function. A series of comprehensive experiments shows that the proposed system achieves superior performance in speech enhancement (measured by both time- and frequency-domain metrics) as well as speech recognition.
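To make the MSTFT loss mentioned above concrete, here is a minimal NumPy sketch of the usual formulation: a spectral-convergence term plus a log-magnitude L1 term, averaged over several STFT resolutions. The specific `(fft_size, hop)` pairs and the Hann window are illustrative assumptions; the paper's exact settings may differ.

```python
import numpy as np

def stft_mag(x, fft_size, hop):
    """Magnitude STFT of a 1-D signal via Hann-windowed frames (NumPy only)."""
    win = np.hanning(fft_size)
    n_frames = 1 + (len(x) - fft_size) // hop
    frames = np.stack([x[i * hop : i * hop + fft_size] * win
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=-1))

def mstft_loss(pred, target,
               resolutions=((512, 128), (1024, 256), (2048, 512))):
    """Multi-resolution STFT loss: spectral convergence + log-magnitude L1,
    averaged over several (fft_size, hop) settings. The resolutions here are
    hypothetical defaults, not the paper's exact configuration."""
    total = 0.0
    for fft_size, hop in resolutions:
        P = stft_mag(pred, fft_size, hop)
        T = stft_mag(target, fft_size, hop)
        # Spectral convergence: relative Frobenius-norm error of magnitudes.
        sc = np.linalg.norm(T - P) / (np.linalg.norm(T) + 1e-8)
        # Log-magnitude L1: penalizes errors in low-energy regions too.
        mag = np.mean(np.abs(np.log(T + 1e-8) - np.log(P + 1e-8)))
        total += sc + mag
    return total / len(resolutions)
```

Comparing magnitudes at multiple window sizes trades off time and frequency resolution, which is what lets the second-stage waveform network be supervised on spectral structure without committing to a single STFT configuration.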

AUTHORS

Yun Wang

Christian Fuegen

Didi Zhang

Gil Keren

Kaustubh Kalgaonkar

Ju Lin

Publisher

Interspeech
