July 09, 2020
We propose a direct-to-word sequence model that uses a word network to learn word embeddings from letters. The word network can be integrated seamlessly with arbitrary sequence models, including Connectionist Temporal Classification (CTC) and encoder-decoder models with attention. We show that our direct-to-word model achieves word error rate gains over sub-word-level models for speech recognition. We also show that our direct-to-word approach retains the ability to predict words not seen at training time without any retraining. Finally, we demonstrate that a word-level model can use a larger stride than a sub-word-level model while maintaining accuracy, which makes the model more efficient for both training and inference.
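The core idea lends itself to a short sketch: a small network encodes a word's spelling into an embedding, and acoustic frames are scored against those embeddings, so the output vocabulary is defined by spellings rather than a fixed softmax layer. Below is a minimal, hypothetical PyTorch rendering; LetterToWordEncoder, word_logits, the bidirectional GRU, and all dimensions are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class LetterToWordEncoder(nn.Module):
    """Maps a word's letter sequence to a fixed-size word embedding."""

    def __init__(self, num_letters: int, dim: int = 256):
        super().__init__()
        self.letter_emb = nn.Embedding(num_letters, dim, padding_idx=0)
        self.rnn = nn.GRU(dim, dim, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, letters: torch.Tensor) -> torch.Tensor:
        # letters: (num_words, max_word_len) letter ids, 0-padded
        x = self.letter_emb(letters)         # (num_words, len, dim)
        _, h = self.rnn(x)                   # h: (2, num_words, dim)
        h = torch.cat([h[0], h[1]], dim=-1)  # forward + backward final states
        return self.proj(h)                  # (num_words, dim)


def word_logits(acoustic: torch.Tensor, word_emb: torch.Tensor) -> torch.Tensor:
    """Score each acoustic frame against every word embedding."""
    # acoustic: (time, dim), word_emb: (vocab, dim) -> (time, vocab)
    # In practice a blank score would be appended before a CTC loss.
    return acoustic @ word_emb.t()


# Because word scores come from letter-derived embeddings, extending the
# vocabulary at inference only requires encoding new spellings:
encoder = LetterToWordEncoder(num_letters=28)
lexicon = torch.randint(1, 28, (5000, 12))  # toy lexicon: 5000 words, <=12 letters
embeddings = encoder(lexicon)               # (5000, 256)
frames = torch.randn(200, 256)              # stand-in for acoustic encoder output
logits = word_logits(frames, embeddings)    # (200, 5000)
```

This is what underlies the no-retraining claim in the abstract: a new word is handled by running its spelling through the word network and adding the resulting embedding to the scoring matrix.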
Publisher
ICML