Michael Auli

Michael is a Research Scientist at Facebook AI Research in Menlo Park. During his Ph.D. at the University of Edinburgh, he worked on CCG parsing. While at Microsoft Research, Michael was involved in early work on neural machine translation and neural dialogue models. Following this, he led the team that developed convolutional sequence-to-sequence models. Michael currently works on semi-supervised and self-supervised learning applied to natural language processing and speech recognition.

Michael's Publications



ELI5: Long Form Question Answering

We introduce the first large-scale corpus for long-form question answering, a task requiring elaborate and in-depth answers to open-ended questions. The dataset comprises 270K threads from the Reddit forum “Explain Like I’m Five” (ELI5) where…

Angela Fan, Yacine Jernite, Ethan Perez, David Grangier, Jason Weston, Michael Auli



Mixture Models for Diverse Machine Translation: Tricks of the Trade

Mixture models trained via EM are among the simplest, most widely used and well understood latent variable models in the machine learning literature. Surprisingly, these models have been hardly explored in text generation applications such as…

Tianxiao Shen, Myle Ott, Michael Auli, Marc'Aurelio Ranzato



Pay Less Attention with Lightweight and Dynamic Convolutions

Self-attention is a useful mechanism to build generative models for language and images. It determines the importance of context elements by comparing each element to the current time step. In this paper, we show that a very lightweight…

Felix Wu, Angela Fan, Alexei Baevski, Yann Dauphin, Michael Auli