NLP

Moving Down the Long Tail of Word Sense Disambiguation with Gloss Informed Bi-encoders

June 22, 2020

Abstract

A major obstacle in Word Sense Disambiguation (WSD) is that word senses are not uniformly distributed, causing existing models to generally perform poorly on senses that are either rare or unseen during training. We propose a bi-encoder model that independently embeds (1) the target word with its surrounding context and (2) the dictionary definition, or gloss, of each sense. The encoders are jointly optimized in the same representation space, so that sense disambiguation can be performed by finding the nearest sense embedding for each target word embedding. Our system outperforms previous state-of-the-art models on English all-words WSD; these gains predominantly come from improved performance on rare senses, leading to a 31.1% error reduction on less frequent senses over prior work. This demonstrates that rare senses can be more effectively disambiguated by modeling their definitions.
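To make the architecture concrete, the sketch below implements the bi-encoder scoring step in PyTorch. The abstract specifies only that a context encoder and a gloss encoder are trained into a shared representation space and that disambiguation selects the nearest sense embedding; everything else here, including the choice of BERT-base via Hugging Face Transformers, first-subtoken pooling for the target word, dot-product similarity, and the helper names embed_target and embed_glosses, is an illustrative assumption rather than the authors' exact configuration.

```python
# A minimal sketch of the bi-encoder described above. Model choice, pooling,
# and similarity function are assumptions for illustration, not necessarily
# the paper's exact setup.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
context_encoder = BertModel.from_pretrained("bert-base-uncased")  # embeds the target word in context
gloss_encoder = BertModel.from_pretrained("bert-base-uncased")    # embeds each sense's gloss

def embed_target(sentence: str, target: str) -> torch.Tensor:
    """Encode the sentence and return the hidden state of the target word's first subtoken."""
    enc = tokenizer(sentence, return_tensors="pt")
    hidden = context_encoder(**enc).last_hidden_state[0]       # (seq_len, dim)
    target_ids = tokenizer(target, add_special_tokens=False)["input_ids"]
    ids = enc["input_ids"][0].tolist()
    for i in range(len(ids) - len(target_ids) + 1):            # locate the target's subtoken span
        if ids[i : i + len(target_ids)] == target_ids:
            return hidden[i]                                   # first-subtoken representation
    raise ValueError(f"target {target!r} not found in sentence")

def embed_glosses(glosses):
    """Encode each gloss and pool it to one vector via its [CLS] token."""
    enc = tokenizer(glosses, return_tensors="pt", padding=True)
    return gloss_encoder(**enc).last_hidden_state[:, 0]        # (num_senses, dim)

# Disambiguate "bank" by scoring its context embedding against candidate glosses.
glosses = [
    "a financial institution that accepts deposits",
    "sloping land beside a body of water",
]
with torch.no_grad():
    w = embed_target("She sat on the bank of the river.", "bank")
    g = embed_glosses(glosses)
    scores = g @ w                                             # one dot-product score per sense
print("predicted sense:", int(scores.argmax()))                # the nearest gloss embedding wins
```

At training time, the same dot-product scores over a word's candidate senses can be fed through a softmax and a cross-entropy loss against the annotated sense, which optimizes both encoders jointly in the shared space; at inference, the highest-scoring gloss is the predicted sense, and this works even for senses whose glosses were never seen during training.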

AUTHORS

Terra Blevins

Luke Zettlemoyer

Publisher

Association for Computational Linguistics (ACL)
