Adina is a Research Scientist at Facebook AI Research in NYC (started October 2018). Previously, she earned her PhD in the Department of Linguistics at New York University, where she investigated the brain basis of syntactic and semantic processing. Her main research goal is to strengthen the connections between linguistics and cognitive science on the one hand and natural language processing and artificial intelligence on the other. She approaches this goal from both directions: she brings insights about human language from linguistics and cognitive science to bear on training, evaluating, and debiasing NLP systems, and she applies statistical methods and corpus-analytic tools from NLP to uncover new quantitative, cross-linguistic facts about human languages.
July 06, 2020
The success of neural networks on a diverse set of NLP tasks has led researchers to question how much these networks actually know about natural language. Probes are a…
Tiago Pimentel, Josef Valvoda, Rowan Hall Maudslay, Ran Zmigrod, Adina Williams, Ryan Cotterell
June 19, 2020
The noun lexica of many natural languages are divided into several declension classes with characteristic morphological properties. Class membership is far from deterministic…
Adina Williams, Tiago Pimentel, Arya D. McCarthy, Hagen Blix, Eleanor Chodroff, Ryan Cotterell
June 19, 2020
Natural language inference (NLI) is an increasingly important task for natural language understanding, which requires one to infer whether a sentence entails another. However, the ability of NLI models…
Paloma Jeretič, Alex Warstadt, Suvrat Bhooshan, Adina Williams
June 05, 2020
We introduce a new large-scale NLI benchmark dataset, collected via an iterative, adversarial human-and-model-in-the-loop procedure. We show that training models on this…
Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, Douwe Kiela
July 31, 2020
Being engaging, knowledgeable, and empathetic are all desirable general qualities in a conversational agent. Previous work has introduced tasks and datasets that aim to help…
June 19, 2020
Measuring what linguistic information is encoded in neural models of language has become popular in NLP. Researchers approach this enterprise by training “probes”: supervised models designed to extract linguistic structure…
Rowan Hall Maudslay, Josef Valvoda, Tiago Pimentel, Adina Williams, Ryan Cotterell
November 04, 2019
Many of the world’s languages employ grammatical gender on the lexeme. For example, in Spanish, the word for house (casa) is feminine, whereas the word for paper (papel) is masculine. To a speaker of a genderless language, this assignment seems…
Adina Williams, Ryan Cotterell, Lawrence Wolf-Sonkin, Damián E. Blasi, Hanna Wallach
June 16, 2019
While idiosyncrasies of the Chinese classifier system have been a richly studied topic among linguists (Adams and Conklin, 1973; Erbaugh, 1986; Lakoff, 1986), not much work has been done to quantify them with statistical methods. In this paper,…
Shijia Liu, Hongyuan Mei, Adina Williams, Ryan Cotterell
October 29, 2018
State-of-the-art natural language processing systems rely on supervision in the form of annotated data to learn competent models. These models are generally trained on data in a single language (usually English), and cannot be directly used…
Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel R. Bowman, Holger Schwenk, Ves Stoyanov