Understanding Training Efficiency of Deep Learning Recommendation Models at Scale

February 27, 2021

Abstract

The use of GPUs has proliferated for machine learning workflows and is now considered mainstream for many deep learning models. Meanwhile, training state-of-the-art personalized recommendation models, which consume the highest number of compute cycles at our large-scale data centers, poses unique challenges for GPUs because these models contain both compute-intensive and memory-intensive components.

The GPU performance and efficiency of these recommendation models are largely affected by model architecture configurations such as the number of dense and sparse features and the MLP dimensions. Furthermore, these models often contain large embedding tables that do not fit into limited GPU memory. The goal of this paper is to explain the intricacies of using GPUs to train recommendation models, the factors affecting hardware efficiency at scale, and learnings from a new scale-up GPU server design, Zion.
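The contrast the abstract draws, compute-intensive MLPs over dense features versus memory-intensive embedding lookups over sparse features, can be made concrete with a minimal PyTorch sketch. This is an illustrative toy model, not the paper's code; the class name, table sizes, and dimensions below are all hypothetical.

```python
# Minimal DLRM-style sketch (hypothetical, for illustration only):
# dense features flow through MLPs (compute-intensive), while sparse
# categorical features are looked up in embedding tables (memory-intensive).
import torch
import torch.nn as nn

class TinyDLRM(nn.Module):
    def __init__(self, num_dense, table_sizes, embed_dim, hidden_dim):
        super().__init__()
        # Memory-intensive component: one embedding table per sparse feature.
        # At production scale these tables can exceed a single GPU's memory.
        self.tables = nn.ModuleList(
            nn.EmbeddingBag(rows, embed_dim, mode="sum") for rows in table_sizes
        )
        # Compute-intensive component: MLP over the dense features.
        self.bottom_mlp = nn.Sequential(
            nn.Linear(num_dense, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, embed_dim), nn.ReLU(),
        )
        # Top MLP over the concatenated dense and sparse representations.
        top_in = embed_dim * (1 + len(table_sizes))
        self.top_mlp = nn.Sequential(
            nn.Linear(top_in, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, dense, sparse_indices):
        # sparse_indices: one LongTensor of shape (batch, ids_per_feature)
        # per table; EmbeddingBag pools each row of ids into one vector.
        dense_out = self.bottom_mlp(dense)
        sparse_out = [t(idx) for t, idx in zip(self.tables, sparse_indices)]
        x = torch.cat([dense_out] + sparse_out, dim=1)
        return torch.sigmoid(self.top_mlp(x))

# Toy usage with made-up sizes.
table_sizes = [1000, 5000, 20000]
model = TinyDLRM(num_dense=13, table_sizes=table_sizes, embed_dim=16, hidden_dim=64)
dense = torch.randn(8, 13)
sparse = [torch.randint(0, n, (8, 4)) for n in table_sizes]
print(model(dense, sparse).shape)  # torch.Size([8, 1])
```

In this sketch the MLPs dominate the floating-point work, while the embedding tables dominate memory capacity and bandwidth; at production scale the tables grow far beyond the toy sizes used here, which is why, as the abstract notes, they often do not fit into limited GPU memory.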

AUTHORS

Bilge Acun

Matthew Murphy

Xiaodong Wang

Jade Nie

Carole-Jean Wu

Kim Hazelwood

Publisher

IEEE International Symposium on High-Performance Computer Architecture (HPCA 2021)

Research Topics

Ranking and Recommendations

Systems Research

Related Publications

August 08, 2022

Core Machine Learning

Opacus: User-Friendly Differential Privacy Library in PyTorch

Ashkan Yousefpour, Akash Bharadwaj, Alex Sablayrolles, Graham Cormode, Igor Shilov, Ilya Mironov, Jessica Zhao, John Nguyen, Karthik Prasad, Mani Malek, Sayan Ghosh

December 06, 2018

Systems Research

Rethinking floating point for deep learning

Jeff Johnson

June 22, 2015

Systems Research

NLP

Fast Convolutional Nets With fbfft: A GPU Performance Evaluation

Nicolas Vasilache, Jeff Johnson, Michael Mathieu, Soumith Chintala, Serkan Piantino, Yann LeCun

September 01, 2020

Systems Research

ResiliNet: Failure-Resilient Inference in Distributed Neural Networks

Ashkan Yousefpour, Brian Q. Nguyen, Siddartha Devic, Guanhua Wang, Aboudy Kreidieh, Hans Lobel, Alexandre M. Bayen, Jason P. Jue

March 02, 2020

Systems Research

Federated Optimization in Heterogeneous Networks

Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, Virginia Smith
