Michael is a Research Scientist at Facebook AI Research in Menlo Park. During his Ph.D., he worked on CCG parsing at the University of Edinburgh. While at Microsoft Research, Michael was involved in the early work on neural machine translation and neural dialogue models. Following this, he led the team that developed convolutional sequence-to-sequence models. Michael currently works on semi-supervised and self-supervised learning applied to natural language processing and speech recognition.
July 28, 2019
We introduce the first large-scale corpus for long-form question answering, a task requiring elaborate and in-depth answers to open-ended questions. The dataset comprises 270K threads from the Reddit forum “Explain Like I’m Five” (ELI5) where…
Angela Fan, Yacine Jernite, Ethan Perez, David Grangier, Jason Weston, Michael Auli
June 10, 2019
Mixture models trained via EM are among the simplest, most widely used, and best understood latent variable models in the machine learning literature. Surprisingly, these models have hardly been explored in text generation applications such as…
Tianxiao Shen, Myle Ott, Michael Auli, Marc'Aurelio Ranzato
June 03, 2019
Self-attention is a useful mechanism to build generative models for language and images. It determines the importance of context elements by comparing each element to the current time step. In this paper, we show that a very lightweight…
Felix Wu, Angela Fan, Alexei Baevski, Yann Dauphin, Michael Auli