An Exploratory Study on Multilingual Quality Estimation

November 18, 2021

Abstract

Predicting the quality of machine translation has traditionally been addressed with language-specific models, under the assumption that the quality label distribution or linguistic features exhibit traits that are not shared across languages. An obvious disadvantage of this approach is the need for labelled data for each language pair. We challenge this assumption by exploring different approaches to multilingual Quality Estimation (QE), including using scores from translation models. We show that these outperform single-language models, particularly with less balanced quality label distributions and in low-resource settings. In the extreme case of zero-shot QE, we show that it is possible to accurately predict quality for any given new language using models trained on other languages. Our findings indicate that state-of-the-art neural QE models based on powerful pre-trained representations generalise well across languages, making them more applicable in real-world settings.
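The zero-shot setting described above can be sketched as follows: pool QE training data from several "seen" language pairs, fit a single regressor on language-agnostic features (here, translation-model scores such as average token log-probability, which the abstract mentions as one signal), and evaluate on an unseen language pair. This is an illustrative toy sketch on synthetic data, not the paper's actual model or features; the feature set and the ridge regressor are assumptions for demonstration only.

```python
# Illustrative sketch (not the paper's code): zero-shot QE with a single
# regressor trained on pooled data from several language pairs, using
# language-agnostic features such as translation-model scores.
import numpy as np

rng = np.random.default_rng(0)

def synth_pair(n, noise):
    # Hypothetical features per sentence: average token log-probability
    # from the MT model, and source/target length ratio.
    logprob = rng.uniform(-4.0, -0.5, n)
    len_ratio = rng.uniform(0.6, 1.4, n)
    # Synthetic "quality" label correlated with the MT score.
    quality = (0.25 * logprob + 1.2
               - 0.3 * np.abs(len_ratio - 1.0)
               + rng.normal(0.0, noise, n))
    return np.column_stack([logprob, len_ratio]), quality

# Pool training data from three "seen" language pairs.
Xs, ys = zip(*[synth_pair(200, 0.05) for _ in range(3)])
X_train, y_train = np.vstack(Xs), np.concatenate(ys)

# Closed-form ridge regression: w = (X^T X + lam I)^-1 X^T y
Xb = np.column_stack([X_train, np.ones(len(X_train))])
lam = 1e-3
w = np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ y_train)

# Zero-shot evaluation: an unseen "language pair" drawn the same way.
X_test, y_test = synth_pair(200, 0.05)
pred = np.column_stack([X_test, np.ones(len(X_test))]) @ w
r = np.corrcoef(pred, y_test)[0, 1]
print(f"Pearson r on unseen pair: {r:.2f}")
```

Because the features carry no language-specific information, the same fitted weights transfer to the held-out pair, mirroring the cross-lingual generalisation the abstract reports.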


AUTHORS

Vishrav Chaudhary

Adi Renduchintala

Ahmed El-Kishky

Lucia Specia

Paco Guzmán

Shuo Sun

Fred Blain

Marina Fomicheva

Publisher

AACL
