Sainbayar Sukhbaatar

Before joining FAIR as a Research Scientist, Sainbayar completed his Ph.D. at NYU. Prior to that, he received his B.Eng. and M.S. from the University of Tokyo. His main research interest is deep learning algorithms with reasoning and memory capabilities.

Sainbayar's Work

Adaptive Attention Span in Transformers

Augmenting self-attention with persistent memory

End-To-End Memory Networks

Sainbayar's Publications

July 27, 2019

RESEARCH

Adaptive Attention Span in Transformers

We propose a novel self-attention mechanism that can learn its optimal attention span. This allows us to significantly extend the maximum context size used in Transformers while maintaining control over their memory footprint and computational…

Sainbayar Sukhbaatar, Edouard Grave, Piotr Bojanowski, Armand Joulin

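As a rough illustration of the idea, here is a minimal PyTorch sketch of the soft span mask the abstract alludes to: each head keeps a learnable span parameter, and attention weights beyond that span are ramped down to zero and renormalized. The class name, the ramp width, and the single-scalar parameterization are assumptions for illustration, not the paper's released code.

```python
import torch
import torch.nn as nn

class AdaptiveSpanMask(nn.Module):
    # A sketch of a soft masking function: weights for keys farther than
    # the learned span z are ramped to zero over a window of width R.
    def __init__(self, max_span: int, ramp_size: int = 32):
        super().__init__()
        self.max_span = max_span
        self.ramp_size = ramp_size
        # learnable fraction of the maximum span (one per head in practice;
        # a single scalar here to keep the sketch short)
        self.span_frac = nn.Parameter(torch.zeros(1))

    def forward(self, attn_weights: torch.Tensor) -> torch.Tensor:
        # attn_weights: (..., window) softmax weights over past positions,
        # ordered from most distant (index 0) to most recent (index -1)
        window = attn_weights.size(-1)
        z = self.span_frac.clamp(0, 1) * self.max_span
        # distance of each key position from the current query
        dist = torch.arange(window - 1, -1, -1,
                            device=attn_weights.device,
                            dtype=attn_weights.dtype)
        mask = ((self.ramp_size + z - dist) / self.ramp_size).clamp(0, 1)
        masked = attn_weights * mask
        # renormalize so the masked weights still sum to one
        return masked / (masked.sum(dim=-1, keepdim=True) + 1e-8)
```

An L1 penalty on the learned span is then added to the training loss, which is what pushes each head toward the smallest context it can get away with.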

July 09, 2018

RESEARCH

ML APPLICATIONS

Composable Planning with Attributes

The tasks an agent will need to solve are often not known during training. However, if the agent knows which properties of the environment are important, then, after learning how its actions affect those properties, it may be able to use…

Amy Zhang, Adam Lerer, Sainbayar Sukhbaatar, Rob Fergus, Arthur Szlam

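To make the high-level idea concrete, here is a small Python sketch of planning over a learned graph of attribute configurations, the kind of structure this line of work exploits; the function name, the dictionary-based graph, and the use of plain breadth-first search are illustrative assumptions.

```python
from collections import deque

def plan_attribute_path(transitions, start, goal):
    """BFS over a learned attribute-transition graph (illustrative sketch).

    `transitions` maps an attribute set (e.g. a frozenset) to the attribute
    sets reachable from it by one low-level policy call; `start` and `goal`
    are attribute sets. Returns the sequence of attribute sets to traverse,
    or None if the goal is unreachable in the learned graph.
    """
    frontier = deque([start])
    parent = {start: None}
    while frontier:
        current = frontier.popleft()
        if current == goal:
            path = []
            while current is not None:
                path.append(current)
                current = parent[current]
            return path[::-1]
        for nxt in transitions.get(current, ()):
            if nxt not in parent:
                parent[nxt] = current
                frontier.append(nxt)
    return None  # goal not reachable in the learned graph
```

A low-level, goal-conditioned policy would then be asked to realize each consecutive attribute transition along the returned path.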

April 30, 2018

RESEARCH

ML APPLICATIONS

Intrinsic Motivation and Automatic Curricula via Asymmetric Self-Play

We describe a simple scheme that allows an agent to learn about its environment in an unsupervised manner. Our scheme pits two versions of the same agent, Alice and Bob, against one another: Alice proposes a task for Bob to complete, and then…

Sainbayar Sukhbaatar, Zeming Lin, Ilya Kostrikov, Gabriel Synnaeve, Arthur Szlam, Rob Fergus

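The core of the scheme is its reward structure, which is simple enough to state in a few lines. The sketch below uses the episode times of the two agents, with gamma as a scaling hyperparameter; the function name is ours, a minimal sketch rather than the paper's implementation.

```python
def self_play_rewards(t_alice: float, t_bob: float, gamma: float):
    """Per-episode self-play rewards (a sketch of the reward structure).

    Alice is rewarded only when her proposed task takes Bob longer than it
    took her to set up, so she gravitates toward tasks just beyond the edge
    of Bob's current ability; Bob is penalized for every step he takes, so
    he learns to complete tasks quickly. gamma scales these internal
    rewards against any external task reward.
    """
    reward_alice = gamma * max(0.0, t_bob - t_alice)
    reward_bob = -gamma * t_bob
    return reward_alice, reward_bob
```

Because Alice's reward grows with the gap between Bob's time and her own, the tasks she proposes form an automatic curriculum that tracks Bob's improving competence.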