NLP

Word-order biases in deep-agent emergent communication

July 12, 2019

Abstract

Sequence-processing neural networks have led to remarkable progress on many NLP tasks. As a consequence, there has been increasing interest in understanding to what extent they process language as humans do. We aim here to uncover which biases such models display with respect to "natural" word-order constraints. We train models to communicate about paths in a simple gridworld, using miniature languages that reflect or violate various natural language trends, such as the tendency to avoid redundancy or to minimize long-distance dependencies. We study how the controlled characteristics of our miniature languages affect individual learning and the stability of these languages across multiple network generations. The results paint a mixed picture. On the one hand, neural networks show a strong tendency to avoid long-distance dependencies. On the other hand, there is no clear preference for the efficient, non-redundant encoding of information that is widely attested in natural language. We thus suggest instilling a notion of "effort" into neural networks, as a possible way to make their linguistic behavior more human-like.
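To make the word-order contrast concrete, here is a minimal sketch (not the paper's actual grammar; the path representation and function names are hypothetical) of two miniature languages describing the same gridworld path: one keeps each count adjacent to its direction word (local dependency), while the other separates all directions from all counts, forcing a long-distance dependency between the i-th direction and the i-th count.

```python
# Hypothetical illustration of the word-order manipulation: the same
# gridworld path encoded with local vs. long-distance dependencies.

path = [("left", 3), ("up", 2), ("right", 1)]  # a path as (direction, steps) moves

def local_order(path):
    # Each count is adjacent to its direction, e.g. "left 3 up 2 right 1".
    return " ".join(f"{d} {n}" for d, n in path)

def long_distance_order(path):
    # All directions first, then all counts, e.g. "left up right 3 2 1";
    # linking the i-th count to the i-th direction spans intervening words.
    dirs = " ".join(d for d, _ in path)
    counts = " ".join(str(n) for _, n in path)
    return f"{dirs} {counts}"

print(local_order(path))          # left 3 up 2 right 1
print(long_distance_order(path))  # left up right 3 2 1
```

A learner trained on the first language can resolve each dependency locally, whereas the second requires tracking positions across the whole utterance, which is the kind of burden the paper's miniature languages are designed to probe.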

AUTHORS

Rahma Chaabouni

Alessandro Lazaric

Emmanuel Dupoux

Evgeny Kharitonov

Marco Baroni

Publisher

ACL

Related Publications

May 22, 2023

NLP

Scaling Speech Technology to 1,000+ Languages

Vineel Pratap, Andros Tjandra, Bowen Shi, Paden Tomasello, Arun Babu, Sayani Kundu, Ali Elkahky, Apoorv Vyas, Maryam Fazel-Zarandi, Alexei Baevski, Wei-Ning Hsu, Alexis Conneau, Michael Auli

February 24, 2023

NLP

LLaMA: Open and Efficient Foundation Language Models

Faisal Azhar, Hugo Touvron, Armand Joulin, Aurelien Rodriguez, Baptiste Rozière, Eric Hambro, Gautier Izacard, Guillaume Lample, Marie-Anne Lachaux, Naman Goyal, Thibaut Lavril, Timothee Lacroix, Xavier Martinet, Edouard Grave

February 20, 2023

INTEGRITY

NLP

UNIREX: A Unified Learning Framework for Language Model Rationale Extraction

Maziar Sanjabi, Aaron Chan, Hamed Firooz, Lambert Mathias, Liang Tan, Shaoliang Nie, Xiaochang Peng, Xiang Ren

December 31, 2022

NLP

Textless Speech Emotion Conversion using Discrete & Decomposed Representations

Yossef Mordechay Adi, Abdelrahman Mohamed, Adam Polyak, Emmanuel Dupoux, Evgeny Kharitonov, Jade Copet, Morgane Rivière, Tu Anh Nguyen, Wei-Ning Hsu, Felix Kreuk
