RESEARCH

NLP

Mixture Models for Diverse Machine Translation: Tricks of the Trade

June 08, 2019

Abstract

Mixture models trained via EM are among the simplest, most widely used and well-understood latent variable models in the machine learning literature. Surprisingly, these models have hardly been explored in text generation applications such as machine translation. In principle, they provide a latent variable to control generation and produce a diverse set of hypotheses. In practice, however, mixture models are prone to degeneracies: often only one component gets trained or the latent variable is simply ignored. We find that disabling dropout noise in responsibility computation is critical to successful training. In addition, the design choices of parameterization, prior distribution, hard versus soft EM and online versus offline assignment can dramatically affect model performance. We develop an evaluation protocol to assess both quality and diversity of generations against multiple references, and provide an extensive empirical study of several mixture model variants. Our analysis shows that certain types of mixture models are more robust and offer the best trade-off between translation quality and diversity compared to variational models and diverse decoding approaches.
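To make the training recipe concrete, below is a minimal PyTorch sketch of one hard-EM update for a K-component mixture translation model: responsibilities are computed with dropout disabled, and each sentence pair then updates only its assigned component. The `model(src, tgt, z)` interface, the `hard_em_step` signature, and the tensor shapes are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn.functional as F


def hard_em_step(model, optimizer, src, tgt, num_components, pad_idx):
    """One hard-EM update for a K-component mixture translation model.

    Assumes a hypothetical interface `model(src, tgt, z)` returning
    per-token logits of shape (batch, tgt_len, vocab) for component `z`.
    """
    # E-step: assign each sentence pair to the component with the lowest
    # sequence-level negative log-likelihood. Dropout is disabled here
    # (model.eval()), which the abstract identifies as critical for
    # stable responsibility computation.
    model.eval()
    with torch.no_grad():
        nlls = []
        for z in range(num_components):
            logits = model(src, tgt, z)
            nll = F.cross_entropy(
                logits.transpose(1, 2),  # (batch, vocab, tgt_len)
                tgt,                     # (batch, tgt_len)
                ignore_index=pad_idx,
                reduction="none",
            ).sum(dim=1)                 # (batch,) sequence NLL per component
            nlls.append(nll)
        best_z = torch.stack(nlls, dim=1).argmin(dim=1)  # hard assignments

    # M-step: re-enable dropout and update only the responsible component
    # for each sentence pair (online hard assignment).
    model.train()
    optimizer.zero_grad()
    loss = 0.0
    for z in range(num_components):
        mask = best_z == z
        if mask.any():
            logits = model(src[mask], tgt[mask], z)
            comp_loss = F.cross_entropy(
                logits.transpose(1, 2), tgt[mask], ignore_index=pad_idx
            )
            loss = loss + comp_loss * mask.float().mean()
    loss.backward()
    optimizer.step()
    return best_z
```

A soft-EM variant of this step would instead weight every component's loss by its normalized responsibility rather than committing to the argmin, and an offline variant would fix the assignments for a whole epoch instead of recomputing them per batch.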

Download the Paper

AUTHORS

Written by

Marc'Aurelio Ranzato

Michael Auli

Myle Ott

Tianxiao Shen

Publisher

ICML

Related Publications

April 22, 2024

NLP

Text Quality-Based Pruning for Efficient Training of Language Models

Vasu Sharma*, Karthik Padthe*, Newsha Ardalani, Kushal Tirumala, Russ Howes, Hu Xu, Bernie Huang, Daniel Li (FAIR), Armen Aghajanyan, Gargi Ghosh, Luke Zettlemoyer

April 14, 2024

SPEECH & AUDIO

NLP

CoLLD: Contrastive Layer-to-Layer Distillation for Compressing Multilingual Pre-Trained Speech Encoders

Heng-Jui Chang, Ning Dong (AI), Ruslan Mavlyutov, Sravya Popuri, Andy Chung

April 05, 2024

CONVERSATIONAL AI

NLP

MART: Improving LLM Safety with Multi-round Automatic Red-Teaming

Suyu Ge, Chunting Zhou, Rui Hou, Madian Khabsa, Yi-Chia Wang, Qifan Wang, Jiawei Han, Yuning Mao

February 21, 2024

INTEGRITY

NLP

Watermarking Makes Language Models Radioactive

Tom Sander, Pierre Fernandez, Alain Durmus, Matthijs Douze, Teddy Furon
