November 29, 2023
Massively multilingual and multimodal sentence representations such as SONAR are usually trained to capture only the meaning of the encoded text or speech. We complement this semantic embedding with a generic speech-characteristic embedding that captures the expressive properties of a speech signal. We describe an iterative training procedure that aims to disentangle semantic and expressive speech properties and does not require labeled data. We show the effectiveness of our method on the FLEURS and mExpresso benchmark test sets, using multiple metrics that measure the preservation of meaning and prosody for zero-shot speech-to-speech translation from five languages into English.
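As a loose illustration of the idea only (this is not the authors' implementation; all dimensions and function names below are hypothetical), a speech signal can be mapped to two complementary vectors, a semantic embedding and an expressive embedding, which a decoder can then consume together for expressive speech-to-speech translation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two encoders: in practice these would be
# learned networks (a SONAR-style semantic encoder plus a separately trained
# expressive encoder); here they are fixed random projections for illustration.
FEAT_DIM, SEM_DIM, EXP_DIM = 80, 1024, 512
W_sem = rng.standard_normal((FEAT_DIM, SEM_DIM))
W_exp = rng.standard_normal((FEAT_DIM, EXP_DIM))

def encode_semantic(features: np.ndarray) -> np.ndarray:
    """Mean-pool speech frames, then project into the semantic space."""
    return features.mean(axis=0) @ W_sem

def encode_expressive(features: np.ndarray) -> np.ndarray:
    """Mean-pool speech frames, then project into the expressive space."""
    return features.mean(axis=0) @ W_exp

# A toy utterance: 200 frames of 80-dimensional filterbank-like features.
speech = rng.standard_normal((200, FEAT_DIM))

semantic = encode_semantic(speech)      # captures "what is said"
expressive = encode_expressive(speech)  # captures "how it is said"

# A decoder conditions on both vectors, e.g. via their concatenation,
# so meaning and prosody are carried by separate, disentangled parts.
decoder_input = np.concatenate([semantic, expressive])
print(decoder_input.shape)  # → (1536,)
```

The disentanglement described in the abstract would be enforced during training (so that the semantic vector carries no prosody and vice versa); this sketch only shows the shape of the resulting two-part representation.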
Written by
Paul-Ambroise Duquenne
Kevin Heffernan
Alexandre Mourachko
Benoit Sagot (INRIA)
Publisher
arXiv