Yinhan Liu

Yinhan works with Facebook AI Research (FAIR) on NLP tasks, including pretraining and neural machine translation. She earned her M.S. in operations research from the University of Texas at Austin.

Yinhan's Publications

November 02, 2019

RESEARCH

NLP

Mask-Predict: Parallel Decoding of Conditional Masked Language Models

Most machine translation systems generate text autoregressively from left to right. We, instead, use a masked language modeling objective to train a model to predict any subset of the target words, conditioned on both the input text and a…
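The parallel decoding procedure described in the abstract can be sketched roughly as follows. This is a minimal illustration, assuming a conditional masked language model exposed as a function cmlm(src, tgt) that returns a (token, probability) prediction for every target position; the interface, masking schedule, and constants are illustrative stand-ins, not the paper's released code.

```python
MASK = "<mask>"

def mask_predict(cmlm, src_tokens, tgt_len, iterations=10):
    # Start from a fully masked target of the chosen length.
    tgt = [MASK] * tgt_len
    scores = [0.0] * tgt_len

    for t in range(iterations):
        # Predict all currently masked positions in parallel.
        preds = cmlm(src_tokens, tgt)  # hypothetical interface (see above)
        for i, (tok, prob) in enumerate(preds):
            if tgt[i] == MASK:
                tgt[i], scores[i] = tok, prob

        if t == iterations - 1:
            break

        # Re-mask the lowest-confidence tokens; the count shrinks linearly
        # with each iteration, so the translation is refined over time.
        n_mask = int(tgt_len * (iterations - 1 - t) / iterations)
        worst = sorted(range(tgt_len), key=lambda i: scores[i])[:n_mask]
        for i in worst:
            tgt[i] = MASK

    return tgt
```

The sketch fixes the target length for brevity; in the paper the length is predicted from the source and several candidate lengths are decoded in parallel.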

Marjan Ghazvininejad, Omer Levy, Yinhan Liu, Luke Zettlemoyer

November 07, 2019

RESEARCH

NLP

Cloze-driven Pretraining of Self-attention Networks

We present a new approach for pretraining a bi-directional transformer model that provides significant performance gains across a variety of language understanding problems. Our model solves a cloze-style word reconstruction task, where each…
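As a rough illustration of a cloze-style word-reconstruction objective, the sketch below hides a random subset of tokens and trains a bidirectional Transformer encoder to reconstruct them from the surrounding context. It is a simplified stand-in for the paper's architecture; the model sizes, masking rate, and generic PyTorch modules are assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, D_MODEL, MASK_ID = 10000, 256, 0

embed = nn.Embedding(VOCAB, D_MODEL)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=8, batch_first=True),
    num_layers=4,
)
predict = nn.Linear(D_MODEL, VOCAB)

def cloze_loss(tokens):
    # Hide a random subset of tokens and reconstruct each one from the
    # visible context on both sides via bidirectional self-attention.
    mask = torch.rand(tokens.shape) < 0.15
    corrupted = tokens.masked_fill(mask, MASK_ID)
    hidden = encoder(embed(corrupted))
    logits = predict(hidden)
    # Cross-entropy is computed only on the held-out positions.
    return F.cross_entropy(logits[mask], tokens[mask])

loss = cloze_loss(torch.randint(1, VOCAB, (8, 64)))
loss.backward()
```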

Alexei Baevski, Sergey Edunov, Yinhan Liu, Luke Zettlemoyer, Michael Auli
