No Language Left Behind

Driving inclusion through the power of AI translation

About No Language Left Behind

No Language Left Behind (NLLB) is a first-of-its-kind AI breakthrough project that open-sources models capable of delivering evaluated, high-quality translations directly between 200 languages, including low-resource languages like Asturian, Luganda and Urdu. It aims to give people the opportunity to access and share web content in their native language, and to communicate with anyone, anywhere, regardless of their language preferences.

AI RESEARCH FOR REAL-WORLD APPLICATION

Applying AI techniques to Facebook and Instagram for translation of low-resource languages

We’re committed to bringing people together. That’s why we’re using modeling techniques and learnings from our NLLB research to improve translations of low-resource languages on Facebook and Instagram. By applying these techniques and learnings to our production translation systems, we’re helping people make more authentic, more meaningful connections in their preferred or native languages. In the future, we hope to extend our learnings from NLLB to more Meta apps.

REAL-WORLD APPLICATION

Building for an inclusive metaverse

A translated metaverse: bringing people together on a global scale

As we build for the metaverse, integrating real-time AR/VR text translation in hundreds of languages is a priority. Our aim is to set a new standard of inclusion, where someday everyone can access virtual-world content, devices and experiences, with the ability to communicate with anyone in any language in the metaverse. Over time, this will bring people together on a global scale.

REAL-WORLD APPLICATION

Translating Wikipedia for everyone

Helping volunteer editors make information available in more languages

The technology behind the NLLB-200 model, now available through the Wikimedia Foundation’s Content Translation tool, is supporting Wikipedia editors as they translate information into their native and preferred languages. Editors are using the technology to more efficiently translate and edit articles originating in under-represented languages, such as Luganda and Icelandic. This helps make more knowledge available in more languages for Wikipedia readers around the world. The open-source NLLB-200 model will also help researchers and interested Wikipedia editor communities build on our work.

Experience the Tech

Stories Told Through Translation: books from around the world translated into hundreds of languages

Experience the power of AI translation with Stories Told Through Translation, our demo that uses the latest AI advancements from the No Language Left Behind project. The demo translates books from their languages of origin, such as Indonesian, Somali and Burmese, into more languages for readers, with hundreds available in the coming months. Through this initiative, NLLB-200 will be the first-ever AI model able to translate literature at this scale.

The Rose Village

By Su Nyein Chan

A farmer lives in a village that only grows red roses. What will happen when he plants strange seeds from a box he finds in his basement?

Read Story
The Elephant in My House

By Prum Kunthearo

When a baby elephant runs into their house, Botom is jealous of how much attention he gets. Can Botom get rid of the elephant, or will she become friends with the lovable creature as well?

Read Story
What Could I Become?

By Nabila Adani

A girl is inspired by a school assignment to think about what she wants to be when she grows up. What will her dreams inspire her to become?

Read Story
Samad in the Forest

By Mohammed Umar

Samad loved animals. His dream was to spend a whole day in a forest and sleep in the treehouse. Follow Samad as he embarks on this adventure, where he makes wonderful friends and amazing discoveries. Going into a forest has never been so much fun.

Read Story
The Prince and the Tiger

By Wulan Mulya Pratiwi

The prince is lost in the forest. A tiger is tracking him. What will he do?

Read Story

The Tech

Machine translation explained

How does the open-source NLLB model directly translate 200 languages?

Stage 1: Automatic dataset construction

Training data is collected automatically: sentences in the input language are paired with sentences in the desired output language.
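To make the pairing step concrete, here is a minimal sketch of embedding-based bitext mining. It assumes a multilingual sentence encoder; the `embed` function below is a random-vector placeholder standing in for a real encoder such as LASER, and the similarity threshold is arbitrary.

```python
import numpy as np

def embed(sentences: list[str]) -> np.ndarray:
    """Placeholder encoder: one L2-normalized vector per sentence."""
    rng = np.random.default_rng(0)
    vecs = rng.normal(size=(len(sentences), 1024))
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

def mine_pairs(src: list[str], tgt: list[str], threshold: float = 0.8):
    """Pair each source sentence with its nearest target sentence."""
    sims = embed(src) @ embed(tgt).T      # cosine similarity matrix
    best = sims.argmax(axis=1)            # nearest neighbor per source
    return [(src[i], tgt[j]) for i, j in enumerate(best)
            if sims[i, j] >= threshold]   # keep only confident pairs
```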

Stage 2: Training

After aligned training data has been created for thousands of translation directions, it is fed into our model training pipeline. These models are made up of two parts: the encoder, which converts the input sentence into an internal vector representation, and the decoder, which takes this internal vector representation and accurately generates the output sentence. By training on millions of example translations, models learn to generate more accurate translations.
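Once trained, such an encoder-decoder can be used for direct translation. As a hedged sketch, the openly released NLLB-200 checkpoint can be run through the Hugging Face transformers library roughly as follows; the checkpoint name and the language-code handling reflect common usage and may vary by library version.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

name = "facebook/nllb-200-distilled-600M"  # distilled public checkpoint
tokenizer = AutoTokenizer.from_pretrained(name, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(name)

inputs = tokenizer("No language left behind.", return_tensors="pt")
out = model.generate(
    **inputs,
    # Force the decoder to start generating in the target language.
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("fra_Latn"),
)
print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])
```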

Stage 3: Evaluation

Finally, we evaluate the model against a set of human-translated sentences to confirm that the translation quality meets our standards. This includes detecting and filtering out profanity and other offensive content, using toxicity lists we build for all supported languages. The result is a well-trained model that can translate directly between languages.
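The toxicity-filtering step can be pictured with a small sketch. This is an illustrative wordlist filter only; the entries below are hypothetical and do not reflect the structure of the real per-language lists.

```python
# Placeholder per-language toxicity lists; real lists are far larger.
TOXICITY_LISTS: dict[str, set[str]] = {
    "eng_Latn": {"badword1", "badword2"},  # hypothetical entries
}

def is_clean(translation: str, lang: str) -> bool:
    """Return True if no token matches the language's toxicity list."""
    toxic = TOXICITY_LISTS.get(lang, set())
    tokens = {t.strip(".,!?").lower() for t in translation.split()}
    return tokens.isdisjoint(toxic)

print(is_clean("A perfectly fine sentence.", "eng_Latn"))  # True
```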


The Innovations

The science behind the breakthrough

Most of today’s machine translation (MT) models work for mid- to high-resource languages—leaving most low-resource languages behind. AI at Meta researchers are addressing this issue with three significant AI innovations.

Automatic dataset construction for low-resource languages

The context

MT is a supervised learning task, which means the model needs data to learn from. Example translations from open-source data collections are often used. Our solution is to automatically construct translation pairs by aligning sentences across separate collections of monolingual documents.

The challenge

The LASER models used for this dataset creation process primarily support mid- to high-resource languages, making it impossible to produce accurate translation pairs for low-resource languages.

The innovation

We solved this by investing in a teacher-student training procedure, making it possible to 1) extend LASER’s language coverage to 200 languages, and 2) produce a massive amount of data, even for low-resource languages.
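As a rough illustration of the teacher-student idea, the sketch below trains a toy student encoder for a new language to mimic a frozen teacher's sentence embeddings across translation pairs. The encoders, loss and data handling are deliberately simplified assumptions, not the LASER3 implementation.

```python
import torch
import torch.nn as nn

EMB, VOCAB = 256, 10_000
teacher = nn.EmbeddingBag(VOCAB, EMB)  # stands in for the frozen teacher
student = nn.EmbeddingBag(VOCAB, EMB)  # encoder for the new language
for p in teacher.parameters():
    p.requires_grad = False

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
loss_fn = nn.CosineEmbeddingLoss()

def train_step(src_ids: torch.Tensor, tgt_ids: torch.Tensor) -> float:
    """src_ids: (batch, len) token ids of high-resource sentences;
    tgt_ids: token ids of their translations in the new language."""
    with torch.no_grad():
        t = teacher(src_ids)                  # frozen teacher embeddings
    s = student(tgt_ids)                      # trainable student embeddings
    loss = loss_fn(s, t, torch.ones(len(s)))  # pull each pair together
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy batch of 4 sentence pairs, 8 tokens each.
print(train_step(torch.randint(VOCAB, (4, 8)), torch.randint(VOCAB, (4, 8))))
```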

Modeling 200 languages

The context

Multilingual MT systems improve on bilingual systems because they enable "transfer" from language pairs with plenty of training data to languages with fewer training resources.

The challenge

Jointly training hundreds of language pairs has its disadvantages, as the same model must represent increasingly large numbers of languages with the same number of parameters. This is an issue when dataset sizes are imbalanced, as it can cause overfitting.

The innovation

We’ve developed a Sparse Mixture-of-Experts model with shared and specialized capacity, so low-resource languages without much data are automatically routed to the shared capacity. Combined with better regularization, this avoids overfitting. We also used self-supervised learning and large-scale data augmentation through multiple types of back-translation.
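To make the routing idea concrete, here is a toy top-1 Mixture-of-Experts layer. It is a simplified sketch of sparse expert routing in general, with made-up sizes, and does not reproduce NLLB's shared-capacity design or load balancing.

```python
import torch
import torch.nn as nn

class TopOneMoE(nn.Module):
    """Route each token to the single expert its gate scores highest."""

    def __init__(self, dim: int = 512, n_experts: int = 4):
        super().__init__()
        self.gate = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        scores = self.gate(x).softmax(dim=-1)   # (tokens, n_experts)
        weight, idx = scores.max(dim=-1)        # top-1 expert per token
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = idx == e
            if mask.any():                      # run only routed tokens
                out[mask] = weight[mask, None] * expert(x[mask])
        return out

moe = TopOneMoE()
print(moe(torch.randn(10, 512)).shape)  # torch.Size([10, 512])
```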

Evaluating translation quality

The context

To know if a translation produced by our model meets our quality standards, we must evaluate it.

The challenge

Machine translation models are typically evaluated by comparing machine-translated sentences with human translations. For many languages, however, reliable reference translations are not available, so accurate evaluation is not possible.

The innovation

We doubled the coverage of FLORES, a human-translated evaluation benchmark, so that it now covers 200 languages. With automatic metrics and human evaluation support, we are able to extensively quantify the quality of our translations.
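For illustration, automatic metrics such as BLEU can be computed against human references with the sacrebleu library, roughly as below; the sentences are placeholders, not FLORES-200 data, and NLLB's full evaluation also uses other metrics and human review.

```python
import sacrebleu

hypotheses = ["The cat sits on the mat."]           # model outputs
references = [["The cat is sitting on the mat."]]   # human translations
bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.1f}")
```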

Learn more about the science behind NLLB by reading our whitepaper and blog, and by downloading the model to help us take this project further.

The Journey

Research milestones

AI at Meta has been advancing machine translation technology while overcoming numerous industry challenges along the way, from the unavailability of data for low-resource languages to translation quality and accuracy. Our journey continues as we drive inclusion through the power of AI translation.

Model milestones by number of languages released

LASER (Language-agnostic sentence representations)

2018

The first successful exploration of massively multilingual sentence representations shared publicly with the NLP community. The encoder creates embeddings to automatically pair up sentences sharing the same meaning in 50 languages.

Data Encoders

WMT-19

2019

FB AI models outperformed all other models at WMT 2019, using large-scale sampled back-translation, noisy channel modeling and data cleaning techniques to help build a strong system.

Model

Flores V1

2019

A benchmarking dataset for MT between English and low-resource languages, introducing a fair and rigorous evaluation process and starting with two languages.

Evaluation Dataset

WikiMatrix

2019

The largest extraction of parallel sentences across multiple languages: bitext extraction of 135 million Wikipedia sentences in 1,620 language pairs for building better translation models.

Data Construction

M2M-100

2020

The first single multilingual machine translation model to directly translate between any pair of 100 languages without relying on English data. Trained on 2,200 language directions, 10x more than previous multilingual models.

Model

CCMatrix

2020

The largest dataset of high-quality, web-based bitexts for building better translation models that work with more languages, especially low-resource languages: 4.5 billion parallel sentences in 576 language pairs.

Data Construction

LASER 2

2020

Creates embeddings to automatically pair up sentences sharing the same meaning in 100 languages.

Data Encoders

WMT-21

2021

For the first time, a single multilingual model outperformed the best specially trained bilingual models across 10 out of 14 language pairs to win WMT 2021, providing the best translations for both low- and high-resource languages.

Model

FLORES-101

2021

A first-of-its-kind many-to-many evaluation dataset covering 101 languages, enabling researchers to rapidly test and improve upon multilingual translation models like M2M-100.

Evaluation Dataset

NLLB-200

2022

The NLLB-200 model translates between 200 languages.

Model

FLORES-200

2022

Expansion of the FLORES evaluation dataset, now covering 200 languages.

Evaluation Dataset

NLLB-Data-200

2022

Constructed and released training data for 200 languages.

Data Construction

LASER 3

2022

Creates embeddings to automatically pair up sentences sharing the same meaning in 200 languages.

Data Encoders


From Assamese, Balinese and Estonian…to Icelandic, Igbo and more. 200 languages and counting…

Have a look at the full list of languages our NLLB-200 model supports—with 150 low-resource languages included. More will be added to this list as we, and our community, continue on this journey of inclusiveness through AI translation.

Full list of supported languages

Acehnese (Latin script)

Arabic (Iraqi/Mesopotamian)

Arabic (Yemen)

Arabic (Tunisia)

Afrikaans

Arabic (Jordan)

Akan

Amharic

Arabic (Lebanon)

Arabic (Modern Standard Arabic)

Arabic (Saudi Arabia)

Arabic (Morocco)

Arabic (Egypt)

Assamese

Asturian

Awadhi

Aymara

Crimean Tatar

Welsh

Danish

German

French

Friulian

Fulfulde

Dinka (Rek)

Dyula

Dzongkha

Greek

English

Esperanto

Estonian

Basque

Ewe

Faroese

Iranian Persian

Icelandic

Italian

Javanese

Japanese

Kabyle

Kachin (Jinghpo)

Kamba

Kannada

Kashmiri (Arabic script)

Kashmiri (Devanagari script)

Georgian

Kanuri (Arabic script)

Kanuri (Latin script)

Kazakh

Kabiye

Thai

Khmer

Kikuyu

South Azerbaijani

North Azerbaijani

Bashkir

Bambara

Balinese

Belarusian

Bemba

Bengali

Bhojpuri

Banjar (Latin script)

Tibetan

Bosnian

Buginese

Bulgarian

Catalan

Cebuano

Czech

Chokwe

Central Kurdish

Fijian

Finnish

Fon

Scottish Gaelic

Irish

Galician

Guarani

Gujarati

Haitian Creole

Hausa

Hebrew

Hindi

Chhattisgarhi

Croatian

Hungarian

Armenian

Igbo

Ilocano

Indonesian

Kinyarwanda

Kyrgyz

Kimbundu

Kongo

Korean

Kurdish (Kurmanji)

Lao

Latvian (Standard)

Ligurian

Limburgish

Lingala

Lithuanian

Lombard

Latgalian

Luxembourgish

Luba-Kasai

Ganda

Dholuo

Mizo


200 languages translated by the NLLB-200 model, 2x more than our previous model

A 44% BLEU performance improvement over the previous state-of-the-art model

75 languages previously unsupported by commercial translation systems

18 billion parallel sentences, 2.5x more training data than our previous M2M-100 model

The largest open-source machine translation model at 54 billion parameters, 5x more than the previous M2M-100 model

40,000 translation directions supported by a single model, more than 4x the previous benchmark

The research advancements from NLLB support more than 25 billion translations served every day on Facebook News Feed, Instagram and our other platforms


Learn More

Let's take No Language Left Behind further, together.

There’s more to learn about NLLB, and even more to accomplish with it. Read our whitepaper and blog for details, and download the model to help us take this project further. While we’ve reached 200 languages, we’ve only just begun. Join us, and build with us, as we continue on this important journey of translation and inclusion.