May 5, 2019
We present a method for translating music across musical instruments and styles. The method is based on unsupervised training of a multi-domain WaveNet autoencoder, with a shared encoder and a domain-independent latent space that is trained end-to-end on waveforms. Employing a diverse training dataset and a large network capacity, the single encoder allows us to translate even from musical domains that were not seen during training. We evaluate our method on a dataset collected from professional musicians and achieve convincing translations. We also study the properties of the resulting translations and demonstrate translation even from a whistle, potentially enabling the creation of instrumental music by untrained humans.
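The core idea, a single shared encoder feeding one decoder per target musical domain, can be illustrated with the minimal sketch below. This is not the authors' implementation: the module names, layer sizes, and plain convolutional stacks are hypothetical, and the actual system uses WaveNet-style autoregressive decoders trained on raw waveforms rather than the simple networks shown here.

```python
# Minimal sketch of a shared-encoder, per-domain-decoder translator.
# All names and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    def __init__(self, latent_channels=64):
        super().__init__()
        # Downsampling 1-D conv stack: raw waveform -> latent sequence.
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=15, stride=4, padding=7), nn.ReLU(),
            nn.Conv1d(32, latent_channels, kernel_size=15, stride=4, padding=7),
        )

    def forward(self, wav):      # wav: (batch, 1, samples)
        return self.net(wav)     # latent: (batch, latent_channels, samples // 16)

class DomainDecoder(nn.Module):
    def __init__(self, latent_channels=64):
        super().__init__()
        # Upsampling stack back to a waveform; one decoder per output domain.
        self.net = nn.Sequential(
            nn.ConvTranspose1d(latent_channels, 32, kernel_size=16, stride=4, padding=6), nn.ReLU(),
            nn.ConvTranspose1d(32, 1, kernel_size=16, stride=4, padding=6), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

class MusicTranslator(nn.Module):
    def __init__(self, domains, latent_channels=64):
        super().__init__()
        self.encoder = SharedEncoder(latent_channels)   # shared across all domains
        self.decoders = nn.ModuleDict(
            {d: DomainDecoder(latent_channels) for d in domains}
        )

    def forward(self, wav, target_domain):
        z = self.encoder(wav)                           # domain-independent latent code
        return self.decoders[target_domain](z)          # render audio in the target domain

# Usage: translate one second of 16 kHz audio from any source into the "piano" domain.
model = MusicTranslator(domains=["piano", "violin", "flute"])
wav = torch.randn(1, 1, 16000)
out = model(wav, "piano")
print(out.shape)  # torch.Size([1, 1, 16000])
```

Because the encoder is shared and never told the source domain, it is pushed toward a representation that captures the musical content rather than the instrument, which is what makes translation from unseen domains (such as a whistle) plausible.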