RESEARCH

SPEECH & AUDIO

Learning Word Vectors for 157 Languages

March 23, 2019

Abstract

Distributed word representations, or word vectors, have recently been applied to many tasks in natural language processing, leading to state-of-the-art performance. A key ingredient to the successful application of these representations is to train them on very large corpora, and use these pre-trained models in downstream tasks. In this paper, we describe how we trained such high-quality word representations for 157 languages. We used two sources of data to train these models: the free online encyclopedia Wikipedia and data from the Common Crawl project. We also introduce three new word analogy datasets to evaluate these word vectors, for French, Hindi and Polish. Finally, we evaluate our pre-trained word vectors on 10 languages for which evaluation datasets exist, showing very strong performance compared to previous models.
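
As a brief illustration (not from the paper itself), the released vectors can be queried for word analogies of the kind the new French, Hindi and Polish datasets contain. The sketch below assumes the French vectors have been downloaded in word2vec text format under the hypothetical file name cc.fr.300.vec, and uses gensim's KeyedVectors API rather than any tooling described in the paper.

from gensim.models import KeyedVectors

# Load the pre-trained French word vectors (word2vec text format; the file is large).
vectors = KeyedVectors.load_word2vec_format("cc.fr.300.vec", binary=False)

# Analogy query of the form "roi - homme + femme ~ reine" (king - man + woman ~ queen),
# the kind of semantic relation word analogy datasets are built from.
print(vectors.most_similar(positive=["roi", "femme"], negative=["homme"], topn=5))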

AUTHORS

Edouard Grave

Armand Joulin

Piotr Bojanowski

Tomas Mikolov

Prakhar Gupta

Publisher

LREC

Related Publications

December 15, 2021

RESEARCH

Sample-and-threshold differential privacy: Histograms and applications

Akash Bharadwaj, Graham Cormode

August 30, 2021

SPEECH & AUDIO

NLP

A Two-stage Approach to Speech Bandwidth Extension

Yun Wang, Christian Fuegen, Didi Zhang, Gil Keren, Kaustubh Kalgaonkar, Ju Lin

January 09, 2021

RESEARCH

COMPUTER VISION

Tarsier: Evolving Noise Injection in Super-Resolution GANs

Baptiste Rozière, Camille Couprie, Olivier Teytaud, Andry Rasoanaivo, Hanhe Lin, Nathanaël Carraz Rakotonirina, Vlad Hosu

January 09, 2021

RESEARCH

Improved Sample Complexity for Incremental Autonomous Exploration in MDPs

Jean Tarbouriech, Alessandro Lazaric, Matteo Pirotta, Michal Valko
