PyTorch


PyTorch is an open source deep learning framework built to be flexible and modular for research, with the stability and support needed for production deployment. It enables fast, flexible experimentation through a tape-based autograd system designed for immediate, Python-like execution. With the release of PyTorch 1.0, the framework will also offer graph-based execution, a hybrid front end allowing seamless switching between modes, distributed training, and efficient mobile deployment.
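The tape-based mechanism can be seen in a few lines: operations on tensors that require gradients are recorded as they run, and backward() replays the tape in reverse to compute gradients. A minimal illustration:

```python
import torch

# Tensors created with requires_grad=True record the operations
# applied to them; backward() replays that "tape" in reverse to
# compute gradients with respect to each recorded tensor.
x = torch.ones(2, 2, requires_grad=True)
y = (x * 3).sum()   # y = 3 * (1 + 1 + 1 + 1) = 12
y.backward()        # reverse-mode pass
print(x.grad)       # dy/dx = 3 for every element
```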

Dynamic neural networks

While static graphs are great for production deployment, the research process involved in developing the next great algorithm is truly dynamic. PyTorch uses a technique called reverse-mode auto-differentiation, which allows developers to modify network behavior arbitrarily with zero lag or overhead, speeding up research iterations.
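A small illustration of this dynamism: the forward pass below uses an ordinary Python loop whose depth can change between calls, and autograd still differentiates through it. The shapes and depth here are illustrative:

```python
import torch

w = torch.randn(4, 4, requires_grad=True)

def forward(x, depth):
    # The graph is rebuilt on every call, so plain Python control
    # flow (loops, conditionals) can change the network's structure
    # from one iteration to the next.
    for _ in range(depth):
        x = torch.relu(x @ w)
    return x

x = torch.randn(1, 4)
loss = forward(x, depth=3).sum()
loss.backward()   # gradients flow through all three loop iterations
```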


The best of both worlds

Bringing together elements of flexibility, stability, and scalability, the next release of PyTorch will include a unique hybrid front end. This means AI / ML researchers and developers no longer need to make compromises when deciding which tools to use. With PyTorch's hybrid front end, developers can seamlessly switch between imperative, define-by-run execution and graph mode, boosting productivity and bridging the gap between research and production.
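As a sketch of what moving between modes can look like, torch.jit.trace records an eager-mode function into a graph representation that produces the same results; the function and input shape here are illustrative:

```python
import torch

def f(x):
    # an ordinary eager-mode (define-by-run) function
    return x * 2 + 1

# torch.jit.trace runs f once on example input and records the
# operations into a static graph that can be optimized and exported.
traced = torch.jit.trace(f, torch.randn(3))

x = torch.randn(3)
assert torch.allclose(traced(x), f(x))  # same result via the graph executor
```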


Tape-based autograd

Get Started

  1. Install PyTorch. Multiple installation options are supported, including from source, pip, conda, and pre-built cloud services like AWS.

  2. Review documentation and tutorials to familiarize yourself with PyTorch's tensor library and neural networks.

  3. Check out tools, libraries, pre-trained models, and datasets to support your development needs.

  4. Build, train, and evaluate your neural network. Here's an example of code used to define a simple network:

  • 1
    2
    3
    4
    5
    6
    7
    8
    9
    10
    11
    12
    13
    14
    15
    16
    17
    18
    19
    20
    21
    22
    23
    24
    25
    26
    27
    28
    29
    30
    31
    32
    33
    34
    35
    36
    37
    38
    39
    40
    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    
    
    class Net(nn.Module):
    
        def __init__(self):
            super(Net, self).__init__()
            # 1 input image channel, 6 output channels, 5x5 square convolution
            # kernel
            self.conv1 = nn.Conv2d(1, 6, 5)
            self.conv2 = nn.Conv2d(6, 16, 5)
            # an affine operation: y = Wx + b
            self.fc1 = nn.Linear(16 * 5 * 5, 120)
            self.fc2 = nn.Linear(120, 84)
            self.fc3 = nn.Linear(84, 10)
    
        def forward(self, x):
            # Max pooling over a (2, 2) window
            x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
            # If the size is a square you can only specify a single number
            x = F.max_pool2d(F.relu(self.conv2(x)), 2)
            x = x.view(-1, self.num_flat_features(x))
            x = F.relu(self.fc1(x))
            x = F.relu(self.fc2(x))
            x = self.fc3(x)
            return x
    
        def num_flat_features(self, x):
            size = x.size()[1:]  # all dimensions except the batch dimension
            num_features = 1
            for s in size:
                num_features *= s
            return num_features
    
    
    net = Net()
    print(net)
        
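A typical training loop for a network like the one above computes a loss, backpropagates, and steps an optimizer. This is only a sketch: it uses a stand-in linear model with dummy data, and the loss, optimizer, and learning rate are illustrative choices:

```python
import torch
import torch.nn as nn
import torch.optim as optim

# A stand-in linear model; the Net defined above would slot in the
# same way. Loss function, optimizer, and lr are illustrative.
model = nn.Linear(10, 2)
optimizer = optim.SGD(model.parameters(), lr=0.01)
criterion = nn.MSELoss()

x = torch.randn(8, 10)       # dummy batch of 8 samples
target = torch.randn(8, 2)   # dummy regression targets

for _ in range(5):           # a few optimization steps
    optimizer.zero_grad()    # clear gradients from the previous step
    loss = criterion(model(x), target)
    loss.backward()          # compute gradients via autograd
    optimizer.step()         # update the parameters
```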
