Luke Zettlemoyer

Luke Zettlemoyer is a research manager and site lead for FAIR Seattle. He is also a Professor in the Allen School of Computer Science & Engineering at the University of Washington. His research is in empirical computational semantics, where the goal is to build models that recover representations of the meaning of natural language text. Recent work has focused on language model pretraining, multilingual NLP, semantic parsing, question answering, and information extraction. His honors include a PECASE award, being named an Allen Distinguished Investigator, and more than ten paper awards at top NLP venues. He was a postdoctoral researcher at the University of Edinburgh and received his Ph.D. from MIT.

Luke's Publications

May 20, 2020

RESEARCH

NLP

Generalization through Memorization: Nearest Neighbor Language Models

We introduce kNN-LMs, which extend a pre-trained neural language model (LM) by linearly interpolating it with a k-nearest neighbors (kNN) model. The nearest neighbors are computed according to distance in the pre-trained LM embedding space, and…

Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, Mike Lewis
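As a rough illustration of the interpolation described above, here is a minimal Python/NumPy sketch, not the authors' released code: the function name knn_lm_probs, the datastore layout, and the default values of k and the interpolation weight lam are assumptions for illustration.

```python
# Hypothetical sketch of kNN-LM interpolation (illustrative; not the
# authors' implementation). Assumes a datastore of (key, value) pairs,
# where keys are context embeddings from the pretrained LM and values
# are the tokens that followed each context.
import numpy as np

def knn_lm_probs(query, keys, values, lm_probs, vocab_size, k=8, lam=0.25):
    """Interpolate the LM's next-token distribution with a kNN estimate.

    query:    (d,) embedding of the current context
    keys:     (n, d) stored context embeddings
    values:   (n,) next-token ids paired with each key
    lm_probs: (vocab_size,) pretrained LM distribution for this context
    """
    # Squared L2 distance in the pretrained LM's embedding space.
    dists = np.sum((keys - query) ** 2, axis=1)
    nearest = np.argsort(dists)[:k]

    # Softmax over negative distances of the retrieved neighbors
    # (shifted by the minimum distance for numerical stability).
    weights = np.exp(-(dists[nearest] - dists[nearest].min()))
    weights /= weights.sum()

    # Aggregate neighbor weight per target token.
    knn_probs = np.zeros(vocab_size)
    for idx, w in zip(nearest, weights):
        knn_probs[values[idx]] += w

    # Linear interpolation of the two distributions.
    return lam * knn_probs + (1.0 - lam) * lm_probs
```

Because retrieval happens in the frozen LM's embedding space, the datastore can be extended without any further training of the model.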


May 20, 2020

RESEARCH

NLP

Cloze-driven Pretraining of Self-attention Networks

We present a new approach for pretraining a bi-directional transformer model that provides significant performance gains across a variety of language understanding problems. Our model solves a cloze-style word reconstruction task, where each…

Alexei Baevski, Sergey Edunov, Yinhan Liu, Luke Zettlemoyer, Michael Auli
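A minimal PyTorch sketch of a cloze-style reconstruction loss, assuming a generic bidirectional encoder; the helper name cloze_loss and the single-position masking are simplifications for illustration, not the paper's exact formulation.

```python
# Hypothetical sketch of a cloze-style word reconstruction objective
# (illustrative; not the paper's exact architecture or masking scheme).
import torch
import torch.nn as nn

def cloze_loss(model, token_ids, mask_id, position):
    """Mask one token and train the model to reconstruct it from
    bidirectional context.

    model:     any module mapping (batch, seq) ids -> (batch, seq, vocab) logits
    token_ids: (batch, seq) input token ids
    mask_id:   id of the mask symbol
    position:  index of the token to reconstruct
    """
    targets = token_ids[:, position].clone()
    corrupted = token_ids.clone()
    corrupted[:, position] = mask_id      # hide the target word
    logits = model(corrupted)             # one bidirectional encoder pass
    return nn.functional.cross_entropy(logits[:, position, :], targets)
```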


May 20, 2020

RESEARCH

NLP

Mask-Predict: Parallel Decoding of Conditional Masked Language Models

Most machine translation systems generate text autoregressively from left to right. We instead use a masked language modeling objective to train a model to predict any subset of the target words, conditioned on both the input text and a…

Marjan Ghazvininejad, Omer Levy, Yinhan Liu, Luke Zettlemoyer
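A minimal PyTorch sketch of the parallel decode-and-re-mask loop, assuming a conditional masked LM callable as model(src, tgt) that returns per-position logits; the linear re-masking schedule and the function name mask_predict are illustrative assumptions.

```python
# Hypothetical sketch of mask-predict decoding (illustrative only).
import torch

def mask_predict(model, src, tgt_len, mask_id, iterations=10):
    """Predict all target positions in parallel, then repeatedly
    re-mask and re-predict the least confident tokens."""
    tgt = torch.full((1, tgt_len), mask_id, dtype=torch.long)
    probs = torch.zeros(1, tgt_len)
    with torch.no_grad():
        for t in range(iterations):
            logits = model(src, tgt)                   # (1, tgt_len, vocab)
            new_probs, new_tokens = logits.softmax(-1).max(-1)
            masked = tgt.eq(mask_id)
            tgt[masked] = new_tokens[masked]           # fill masked slots
            probs[masked] = new_probs[masked]
            # Linearly shrink the number of re-masked tokens per pass.
            n_mask = int(tgt_len * (iterations - 1 - t) / iterations)
            if n_mask == 0:
                break
            lowest = probs[0].topk(n_mask, largest=False).indices
            tgt[0, lowest] = mask_id                   # re-mask low confidence
    return tgt
```

Each pass updates all masked positions at once, so decoding cost scales with the number of passes rather than with the target length.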
