Systems Research

We focus on extending the frontiers of machine learning and AI by developing novel algorithmic, software, and hardware techniques.

Our areas of research include programming languages, compilers, low-level optimization, distributed and parallel computing, computer arithmetic, high-performance computing (HPC), GPU/FPGA/ASIC hardware applications, and hardware/software co-design.

Latest Publications


Rethinking floating point for deep learning

We improve floating point arithmetic so that it is more energy efficient than integer hardware of equivalent bit width on a 28 nm ASIC process, while retaining accuracy at 8 bits, by combining a novel hybrid log multiply/linear add, Kulisch accumulation, and the tapered encodings of Gustafson's posit format.

Jeff Johnson
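The key idea behind Kulisch accumulation is that products are summed exactly in a single wide fixed-point accumulator, so rounding happens only once, at final readout, rather than after every addition. The sketch below is purely illustrative and is not the paper's hardware design; the accumulator width and function names are hypothetical.

```python
# Illustrative sketch of Kulisch-style exact accumulation (not the paper's
# ASIC design). Each product is quantized once onto a fixed-point grid and
# summed in an arbitrarily wide integer accumulator; only the final readout
# rounds back to floating point.

SCALE = 1 << 48  # hypothetical fractional width of the wide accumulator

def kulisch_dot(xs, ys):
    acc = 0  # Python ints are unbounded, so intermediate sums are exact
    for x, y in zip(xs, ys):
        acc += round(x * y * SCALE)  # one rounding per product, none per add
    return acc / SCALE  # single final rounding

# A naive floating-point sum loses the small term to catastrophic rounding;
# the exact accumulator keeps it.
xs = [1e16, 1.0, -1e16]
ys = [1.0, 1.0, 1.0]
print(kulisch_dot(xs, ys))          # 1.0
print(sum(x * y for x, y in zip(xs, ys)))  # 0.0 (naive accumulation)
```

The contrast in the last two lines shows why exact accumulation matters for low-precision deep learning: narrow multiplies are tolerable, but rounding on every add compounds.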


Fast Convolutional Nets With fbfft: A GPU Performance Evaluation

We examine the performance profile of convolutional neural network training on the current generation of NVIDIA graphics processing units (GPUs).

Nicolas Vasilache, Jeff Johnson, Michael Mathieu, Soumith Chintala, Serkan Piantino, Yann LeCun
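The principle underlying FFT-based convolution, as exploited by fbfft, is the convolution theorem: pointwise multiplication in the frequency domain equals circular convolution in the time domain, turning O(n²) spatial work into O(n log n) with a fast transform. The toy sketch below uses a naive O(n²) DFT for clarity and is not fbfft's CUDA implementation; all names are illustrative.

```python
import cmath

# Toy demonstration of the convolution theorem (the idea behind fbfft,
# not its CUDA implementation): DFT both inputs, multiply pointwise,
# inverse-DFT, and recover the circular convolution.

def dft(x, inverse=False):
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
               for k in range(n)) for j in range(n)]
    return [v / n for v in out] if inverse else out

def fft_circular_conv(a, b):
    fa, fb = dft(a), dft(b)
    prod = [x * y for x, y in zip(fa, fb)]
    return [round(v.real, 6) for v in dft(prod, inverse=True)]

def direct_circular_conv(a, b):
    n = len(a)
    return [sum(a[k] * b[(j - k) % n] for k in range(n)) for j in range(n)]

a = [1.0, 2.0, 3.0, 4.0]
b = [0.5, 0.0, -1.0, 0.0]
print(fft_circular_conv(a, b))    # matches the direct computation
print(direct_circular_conv(a, b))
```

In production, the naive DFT is replaced by a fast transform and the pointwise products become batched complex matrix multiplies, which is where GPUs excel.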