RESEARCH

NLP

Predicting Declension Class from Form and Meaning

June 19, 2020

Abstract

The noun lexica of many natural languages are divided into several declension classes with characteristic morphological properties. Class membership is far from deterministic, but the phonological form of a noun and/or its meaning can often provide imperfect clues. Here, we investigate the strength of those clues. More specifically, we operationalize this by measuring how much information, in bits, we can glean about declension class from knowing the form and/or meaning of nouns. We know that form and meaning are often also indicative of grammatical gender (which, as we quantitatively verify, can itself share information with declension class), so we also control for gender. We find for two Indo-European languages (Czech and German) that form and meaning respectively share significant amounts of information with class (and contribute additional information above and beyond gender). The three-way interaction between class, form, and meaning (given gender) is also significant. Our study is important for two reasons: First, we introduce a new method that provides additional quantitative support for a classic linguistic finding that form and meaning are relevant for the classification of nouns into declensions. Second, we show not only that individual declension classes vary in the strength of their clues within a language, but also that these variations themselves vary across languages.
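
To make the abstract's measure concrete: the quantity in question is a conditional mutual information, for example I(class; form | gender) = H(class | gender) - H(class | form, gender), measured in bits, where a value significantly above zero means that form tells us something about declension class beyond what gender already does. Below is a minimal plug-in (count-based) Python sketch of that calculation on an invented toy lexicon. It is not the paper's implementation (the paper estimates these quantities over real Czech and German lexicons), and the suffix feature, gender labels, and class labels here are all hypothetical.

from collections import Counter
from math import log2

# Toy lexicon of (final-suffix "form" cue, gender, declension class) triples.
# All entries are invented for illustration only.
toy_nouns = [
    ("ung", "fem", "class1"), ("ung", "fem", "class1"),
    ("keit", "fem", "class1"), ("er", "masc", "class2"),
    ("er", "masc", "class2"), ("el", "masc", "class3"),
    ("el", "neut", "class3"), ("chen", "neut", "class3"),
]

def entropy(counts):
    """Shannon entropy in bits of the distribution given by raw counts."""
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values() if c > 0)

def conditional_entropy(pairs):
    """H(Y | X) in bits, where pairs is a list of (x, y) observations."""
    n = len(pairs)
    by_x = Counter(x for x, _ in pairs)
    h = 0.0
    for x_val, n_x in by_x.items():
        y_counts = Counter(y for x, y in pairs if x == x_val)
        h += (n_x / n) * entropy(y_counts)
    return h

# I(class; form | gender) = H(class | gender) - H(class | form, gender)
h_class_given_gender = conditional_entropy([(g, c) for _, g, c in toy_nouns])
h_class_given_form_gender = conditional_entropy([((f, g), c) for f, g, c in toy_nouns])
mi_form_class_given_gender = h_class_given_gender - h_class_given_form_gender

print(f"H(class | gender)       = {h_class_given_gender:.3f} bits")
print(f"H(class | form, gender) = {h_class_given_form_gender:.3f} bits")
print(f"I(class; form | gender) = {mi_form_class_given_gender:.3f} bits")

On real data, the form variable would be the noun's full phonological or orthographic form and the meaning variable a representation of its lexical semantics, estimated over many more nouns than a raw count table can support; the abstract's finding is that the resulting conditional mutual information values are significant for both Czech and German.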

Download the Paper

AUTHORS

Adina Williams

Arya McCarthy

Eleanor Chodroff

Hagen Blix

Ryan Cotterell

Tiago Pimentel

Publisher

ACL

Related Publications

December 15, 2021

RESEARCH

Sample-and-threshold differential privacy: Histograms and applications

Akash Bharadwaj, Graham Cormode

December 06, 2021

NLP

Pay Better Attention to Attention: Head Selection in Multilingual and Multi-Domain Sequence Modeling

Hongyu Gong, Yun Tang, Juan Miguel Pino, Xian Li

November 16, 2021

NLP

Can Transformers Jump Around Right in Natural Language? Assessing Performance Transfer from SCAN

Rahma Chaabouni, Roberto Dessì, Evgeny Kharitonov

November 08, 2021

NLP

CORE MACHINE LEARNING

DOBF: A Deobfuscation Pre-Training Objective for Programming Languages

Baptiste Rozière, Marie-Anne Lachaux, Marc Szafraniec, Guillaume Lample