December 02, 2019
Deep learning frameworks have often focused on either usability or speed, but not both. PyTorch is a machine learning library that shows that these two goals are in fact compatible: it provides an imperative and Pythonic programming style that supports code as a model, makes debugging easy and is consistent with other popular scientific computing libraries, while remaining efficient and supporting hardware accelerators such as GPUs. In this paper, we detail the principles that drove the implementation of PyTorch and how they are reflected in its architecture. We emphasize that every aspect of PyTorch is a regular Python program under the full control of its user. We also explain how the careful and pragmatic implementation of the key components of its runtime enables them to work together to achieve compelling performance. We demonstrate the efficiency of individual subsystems, as well as the overall speed of PyTorch on several common benchmarks.
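The imperative, Pythonic style described above can be illustrated with a minimal sketch (the variable names below are illustrative, not taken from the paper): the forward pass is ordinary Python code operating on tensors, and reverse-mode automatic differentiation computes gradients on demand.

```python
import torch

# A tiny differentiable computation expressed as plain Python:
# no graph-definition step, just eager tensor operations.
w = torch.ones(3, requires_grad=True)      # parameters to differentiate
x = torch.tensor([1.0, 2.0, 3.0])          # fixed input data

loss = (w * x).sum()    # forward pass is regular Python code
loss.backward()         # autograd computes d(loss)/dw

print(w.grad.tolist())  # gradient of sum(w_i * x_i) w.r.t. w is x
```

Because execution is eager, the intermediate values (`loss`, `w.grad`) can be inspected with standard Python tooling such as `print` or a debugger, which is the "easy debugging" property the abstract highlights.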
Written by
Soumith Chintala
Adam Lerer
Benoit Steiner
Edward Yang
Francisco Massa
Gregory Chanan
Junjie Bai
Lu Fang
Sam Gross
Zachary DeVito
Zeming Lin
Adam Paszke
Alban Desmaison
Alykhan Tejani
Andreas Köpf
James Bradbury
Luca Antiga
Martin Raison
Natalia Gimelshein
Sasank Chilamkurthy
Trevor Killeen
Publisher
NeurIPS