SPEECH & AUDIO

Attention-Based WaveNet Autoencoder for Universal Voice Conversion

April 30, 2019

Abstract

We present a method for converting any voice to a target voice. The method is based on a WaveNet autoencoder, with the addition of a novel attention component that supports the modification of timing between the input and the output samples. The attention is trained in an unsupervised way, by teaching the neural network to recover the original timing from an artificially modified one. By adding a generic voice robot, whose output we convert to the target voice, we obtain a robust Text To Speech pipeline that can be trained without any transcript. Our experiments show that the proposed method is able to recover the timing of the speaker and that the proposed pipeline provides a competitive Text To Speech method.
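The unsupervised timing-recovery training described above can be sketched as a data-augmentation step: each training pair consists of an artificially time-warped frame sequence and its original, and the attention component learns to undo the warp. The function below is an illustrative assumption, not the authors' implementation; the warping scheme and probabilities are hypothetical.

```python
import random

def random_time_warp(frames, drop_p=0.1, dup_p=0.1, rng=None):
    """Artificially modify the timing of a frame sequence by randomly
    dropping or duplicating frames, preserving their order.

    Training on pairs (warped, original) lets an attention module learn
    to recover the original timing without any labels."""
    rng = rng or random.Random()
    warped = []
    for f in frames:
        r = rng.random()
        if r < drop_p:
            continue          # locally speed up: skip this frame
        warped.append(f)
        if r > 1.0 - dup_p:
            warped.append(f)  # locally slow down: repeat this frame
    if not warped:            # guard: never return an empty sequence
        warped = [frames[0]]
    return warped

# Example: build one unsupervised training pair.
original = list(range(10))    # stand-in for encoder content frames
warped = random_time_warp(original, rng=random.Random(0))
# The network would be trained to map `warped` back to `original`,
# with the attention component aligning the mismatched timings.
```

Because drops and duplications never reorder frames, the warped sequence is a monotone resampling of the original, which is exactly the kind of alignment a monotonic attention mechanism can invert.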

Related Publications

SPEECH & AUDIO

Who Needs Words? Lexicon-Free Speech Recognition

Tatiana Likhomanenko, Gabriel Synnaeve, Ronan Collobert

SPEECH & AUDIO

COMPUTER VISION

Learning to Optimize Halide with Tree Search and Random Programs

Andrew Adams, Karima Ma, Luke Anderson, Riyadh Baghdadi, Tzu-Mao Li, Michaël Gharbi, Benoit Steiner, Steven Johnson, Kayvon Fatahalian, Frédo Durand, Jonathan Ragan-Kelley

SPEECH & AUDIO

Learning graphs from data: A signal representation perspective

Xiaowen Dong, Dorina Thanou, Michael Rabbat, Pascal Frossard
