NLP

Countering Language Drift via Visual Grounding

November 4, 2019

Abstract

Emergent multi-agent communication protocols differ sharply from natural language and are not easily interpretable by humans. We find that agents initially pretrained to produce natural language can also experience detrimental language drift: when a non-linguistic reward, such as a scalar success metric, is used in a goal-based task, the communication protocol may easily and radically diverge from natural language. We recast translation as a multi-agent communication game and examine auxiliary training constraints for their effectiveness in mitigating language drift. We show that a combination of syntactic (language-model likelihood) and semantic (visual grounding) constraints gives the best communication performance, allowing pretrained agents to retain English syntax while learning to accurately convey the intended meaning.
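The auxiliary constraints described above can be pictured as extra terms added to the agents' training objective. The sketch below is illustrative only and does not reproduce the paper's implementation: the function name, weights, and loss inputs are all assumptions, showing one plausible way to combine a task loss with a syntactic (language-model negative log-likelihood) term and a semantic (visual grounding) term.

```python
# Hedged sketch: a combined objective with syntactic and semantic
# auxiliary constraints, as the abstract describes at a high level.
# All names and default weights are illustrative, not from the paper.

def combined_loss(task_loss, lm_nll, grounding_loss,
                  lm_weight=0.1, grounding_weight=0.1):
    """Weighted sum of the task loss and two auxiliary constraint terms.

    task_loss       -- loss from the goal-based communication game
    lm_nll          -- negative log-likelihood of the generated message
                       under a pretrained language model (syntactic)
    grounding_loss  -- loss tying the message to visual features
                       (semantic, via visual grounding)
    """
    return task_loss + lm_weight * lm_nll + grounding_weight * grounding_loss
```

In a typical setup the weights would be tuned so that the auxiliary terms regularize the policy without overwhelming the task reward.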

Related Publications

September 10, 2019

NLP

Bridging the Gap Between Relevance Matching and Semantic Matching for Short Text Similarity Modeling | Facebook AI Research

Jinfeng Rao, Linqing Liu, Yi Tay, Wei Yang, Peng Shi, Jimmy Lin

June 11, 2019

NLP

COMPUTER VISION

Adversarial Inference for Multi-Sentence Video Description

Jae Sung Park, Marcus Rohrbach, Trevor Darrell, Anna Rohrbach

May 17, 2019

NLP

Unsupervised Hyper-alignment for Multilingual Word Embeddings

Jean Alaux, Edouard Grave, Marco Cuturi, Armand Joulin

July 27, 2019

NLP

Unsupervised Question Answering by Cloze Translation

Patrick Lewis, Ludovic Denoyer, Sebastian Riedel
