Speech & Audio

A Universal Music Translation Network

May 5, 2019

Abstract

We present a method for translating music across musical instruments and styles. The method is based on unsupervised training of a multi-domain WaveNet autoencoder, with a shared encoder and a domain-independent latent space, trained end-to-end on waveforms. By employing a diverse training dataset and a high-capacity network, the single shared encoder allows us to translate even from musical domains that were not seen during training. We evaluate our method on a dataset collected from professional musicians and achieve convincing translations. We also study the properties of the resulting translations and demonstrate translation even from a whistle, potentially enabling the creation of instrumental music by untrained humans.
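The core idea of the architecture can be sketched in a few lines: a single shared encoder maps audio from any domain into a domain-independent latent code, and each target domain has its own decoder that renders that code back into a waveform. The sketch below is a toy illustration only, using hypothetical linear maps and dimensions in place of the paper's WaveNet encoder and autoregressive decoders; all names and shapes are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (the real model operates on raw waveforms
# with WaveNet blocks, not single linear projections).
LATENT_DIM, FRAME = 8, 64

# One shared encoder projects a waveform frame from ANY domain
# into the domain-independent latent space.
encoder = rng.standard_normal((LATENT_DIM, FRAME)) * 0.1

# Each musical domain gets its own decoder from latent space
# back to waveform space.
decoders = {
    "piano": rng.standard_normal((FRAME, LATENT_DIM)) * 0.1,
    "violin": rng.standard_normal((FRAME, LATENT_DIM)) * 0.1,
}

def translate(frame, target_domain):
    """Encode with the shared encoder, then decode with the
    target domain's decoder -- translation is just a decoder swap."""
    z = encoder @ frame                    # domain-independent code
    return decoders[target_domain] @ z     # render in target domain

# Translate a (random) piano frame into the violin domain.
piano_frame = rng.standard_normal(FRAME)
violin_version = translate(piano_frame, "violin")
assert violin_version.shape == (FRAME,)
```

Because the encoder is shared across all domains, translating from an unseen source (such as a whistle) requires no new components: any input is encoded into the same latent space and decoded by whichever domain decoder is chosen.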

Related Publications

July 28, 2019

Speech & Audio

Computer Vision

Learning to Optimize Halide with Tree Search and Random Programs

Andrew Adams, Karima Ma, Luke Anderson, Riyadh Baghdadi, Tzu-Mao Li, Michaël Gharbi, Benoit Steiner, Steven Johnson, Kayvon Fatahalian, Frédo Durand, Jonathan Ragan-Kelley


September 15, 2019

Speech & Audio

Who Needs Words? Lexicon-Free Speech Recognition

Tatiana Likhomanenko, Gabriel Synnaeve, Ronan Collobert


December 04, 2018

Speech & Audio

Non-Adversarial Mapping with VAEs

Yedid Hoshen


May 01, 2019

Speech & Audio

Learning graphs from data: A signal representation perspective

Xiaowen Dong, Dorina Thanou, Michael Rabbat, Pascal Frossard

