Bringing the world closer together by advancing AI

Deepfake Detection

DensePose

Detectron2

Latest News

RESEARCH

NLP

The evolution of deep learning and PyTorch

March 09, 2020

AR/VR

ML APPLICATIONS

Using integrated ML to deliver low-latency mobile VR graphics

March 05, 2020

COMPUTER VISION

ML APPLICATIONS

Powered by AI: Turning any 2D photo into 3D using convolutional neural nets

February 28, 2020

Open-Source AI Tools

We share our open source frameworks, tools, libraries, and models for everything from research exploration to large-scale production deployment.

Open-Source AI Research

We're advancing the state of the art in artificial intelligence through fundamental and applied research in open collaboration with the community.

Notable Papers

COMPUTER VISION

Live Face De-Identification in Video

Oran Gafni

Lior Wolf

Yaniv Taigman

International Conference on Computer Vision (ICCV)

RESEARCH

COMPUTER VISION

TensorMask: A Foundation for Dense Object Segmentation

Xinlei Chen

Ross Girshick

Kaiming He

Piotr Dollár

International Conference on Computer Vision (ICCV)

RESEARCH

Single-Network Whole-Body Pose Estimation

Gines Hidalgo

Yaadhav Raaj

Haroon Idrees

Donglai Xiang...

International Conference on Computer Vision (ICCV)

COMPUTER VISION

A Universal Music Translation Network

Noam Mor

Lior Wolf

Adam Polyak

Yaniv Taigman

International Conference on Learning Representations (ICLR)

Latest Publications

RESEARCH

The Early Phase of Neural Network Training

Recent studies have shown that many important aspects of neural network learning take place within the very earliest iterations or epochs of training. For example, sparse, trainable sub-networks emerge (Frankle et al., 2019), gradient descent…

Jonathan Frankle, David J. Schwab, Ari Morcos
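The sparse, trainable sub-networks mentioned in the abstract are found by iterative magnitude pruning with rewinding to an early-training checkpoint (Frankle et al., 2019). Below is a minimal, self-contained sketch of that procedure, not the paper's code; the toy model, random data, and step counts are placeholder assumptions.

```python
# Sketch: iterative magnitude pruning with rewinding to an early checkpoint.
import copy
import torch
import torch.nn as nn

def train(model, steps, lr=0.1):
    # Stand-in training loop on random data.
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        x = torch.randn(32, 100)
        y = torch.randint(0, 10, (32,))
        loss = nn.functional.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()

def magnitude_masks(model, sparsity):
    # Keep the largest-magnitude weights in each weight matrix.
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() > 1:                       # prune weight matrices, not biases
            keep = int(p.numel() * (1 - sparsity))
            threshold = p.abs().flatten().kthvalue(p.numel() - keep + 1).values
            masks[name] = (p.abs() >= threshold).float()
    return masks

def apply_masks(model, masks):
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in masks:
                p.mul_(masks[name])

model = nn.Sequential(nn.Linear(100, 64), nn.ReLU(), nn.Linear(64, 10))

train(model, steps=50)                            # brief "early phase" of training
early_state = copy.deepcopy(model.state_dict())   # rewind point

train(model, steps=500)                           # train to completion
masks = magnitude_masks(model, sparsity=0.8)      # identify the sparse sub-network

model.load_state_dict(early_state)                # rewind to the early weights
apply_masks(model, masks)
# Note: a faithful implementation reapplies the masks after every optimizer
# step so that pruned weights stay at zero during retraining.
train(model, steps=500)
```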

RESEARCH

SlowMo: Improving Communication-Efficient Distributed SGD with Slow Momentum

Distributed optimization is essential for training large models on large datasets. Multiple approaches have been proposed to reduce the communication overhead in distributed training, such as synchronizing only after performing multiple local…

Jianyu Wang, Vinayak Tantia, Nicolas Ballas, Michael Rabbat
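A simplified, single-process sketch of the slow-momentum idea described in the abstract: simulated workers take several communication-free SGD steps, their parameters are averaged, and a "slow" momentum update is applied on top of the average. The two-worker simulation, toy model, and hyperparameters are illustrative assumptions, not the paper's configuration.

```python
# Sketch: local SGD with a periodic slow-momentum update on the averaged model.
import copy
import torch
import torch.nn as nn

def local_sgd_steps(model, steps=5, lr=0.05):
    # Inner loop: ordinary SGD on (random) local data, no communication.
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        x = torch.randn(16, 20)
        y = torch.randn(16, 1)
        loss = nn.functional.mse_loss(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()

def average_parameters(models):
    # Stand-in for the all-reduce that averages worker parameters.
    avg = copy.deepcopy(models[0].state_dict())
    for key in avg:
        avg[key] = torch.stack([m.state_dict()[key] for m in models]).mean(0)
    return avg

# Two simulated workers starting from the same initialization.
reference = nn.Linear(20, 1)
workers = [copy.deepcopy(reference) for _ in range(2)]

slow_lr, slow_momentum = 1.0, 0.5
slow_buf = {k: torch.zeros_like(v) for k, v in reference.state_dict().items()}
prev = copy.deepcopy(reference.state_dict())

for outer_step in range(10):
    for w in workers:
        local_sgd_steps(w)                     # communication-free inner steps
    avg = average_parameters(workers)          # synchronize (average) once

    # Slow momentum step applied to the averaged parameters.
    for k in prev:
        slow_buf[k] = slow_momentum * slow_buf[k] + (prev[k] - avg[k]) / slow_lr
        prev[k] = prev[k] - slow_lr * slow_buf[k]

    for w in workers:                          # broadcast the updated parameters
        w.load_state_dict(prev)
```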

RESEARCH

And the bit goes down: Revisiting the quantization of neural networks

In this paper, we address the problem of reducing the memory footprint of convolutional network architectures. We introduce a vector quantization method that aims at preserving the quality of the reconstruction of the network outputs rather…

Pierre Stock, Armand Joulin, Rémi Gribonval, Benjamin Graham, Hervé Jégou
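The output-preserving objective in the abstract can be illustrated with a small sketch: each column of a weight matrix is replaced by a codeword chosen to keep the layer's outputs on sample inputs close, rather than to match the weights themselves. The shapes, codebook size, and simple alternating scheme below are illustrative assumptions, not the paper's product-quantization pipeline.

```python
# Sketch: codebook quantization of weight columns driven by output error.
import torch

torch.manual_seed(0)
d_in, d_out, k = 64, 128, 16           # toy layer shape and codebook size
W = torch.randn(d_in, d_out)           # weights of a toy linear layer
X = torch.randn(256, d_in)             # sample inputs seen by that layer

# Initialize the codebook with randomly chosen weight columns.
codebook = W[:, torch.randperm(d_out)[:k]].clone()    # (d_in, k)

for _ in range(10):
    # Assignment: pick, for every weight column w, the codeword c that best
    # preserves the layer output, i.e. minimizes ||X @ w - X @ c||.
    out_true = X @ W                                   # (n, d_out)
    out_code = X @ codebook                            # (n, k)
    distances = torch.cdist(out_true.T, out_code.T)    # (d_out, k)
    assign = distances.argmin(dim=1)                   # codeword index per column

    # Update: refit each codeword by least squares against the average
    # output of the columns assigned to it.
    for c in range(k):
        cols = (assign == c).nonzero(as_tuple=True)[0]
        if len(cols) == 0:
            continue
        target = out_true[:, cols].mean(dim=1, keepdim=True)   # (n, 1)
        codebook[:, c] = torch.linalg.lstsq(X, target).solution.squeeze(1)

W_quantized = codebook[:, assign]      # reconstructed weights, (d_in, d_out)
print("output reconstruction error:",
      torch.norm(X @ W - X @ W_quantized).item())
```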

RESEARCH

Permutation Equivariant Models for Compositional Generalization in Language

Humans understand novel sentences by composing meanings and roles of core language components. In contrast, neural network models for natural language modeling fail when such compositional generalization is required. The main contribution of…

Jonathan Gordon, David Lopez-Paz, Marco Baroni, Diane Bouchacourt

Help Us Pioneer the Future of AI
