
PyTorch

What is PyTorch?

PyTorch is an open-source machine learning library that Python developers use for deep learning. Based on the Torch library, it is commonly used to build applications in computer vision and natural language processing. Developers also use PyTorch Geometric, an extension of PyTorch, for deep learning on irregular input data such as graphs, point clouds, and manifolds.

FAIR (Facebook’s AI Research Lab) primarily developed PyTorch and released it as free and open-source software under the BSD license, with an initial release in 2016. The main languages of the PyTorch library are Python, C++, and CUDA.
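As a taste of that irregular-data support, below is a minimal sketch (assuming the separately installed torch_geometric package) that builds a small graph object of the kind PyTorch Geometric models consume:

import torch
from torch_geometric.data import Data

# A tiny graph: 3 nodes with one feature each, and two undirected edges
# expressed as pairs of directed edges (source row, target row)
edge_index = torch.tensor([[0, 1, 1, 2],
                           [1, 0, 2, 1]], dtype=torch.long)
x = torch.tensor([[-1.0], [0.0], [1.0]])

graph = Data(x=x, edge_index=edge_index)
print(graph)  # Data(x=[3, 1], edge_index=[2, 4])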

Key Features of PyTorch

PyTorch is a machine learning framework that supports the path from research prototyping to production deployment. It has many features and capabilities that make it popular with modern developers, including a straightforward API, tight integration with Python and the data science stack, simple ways of building computational graphs, a C++ front end, and many more. Some of the key features of PyTorch are the following:

1. Production Ready

PyTorch provides an optimal transition between eager and graph modes with the help of TorchScript. Eager mode gives developers a friendly and flexible way to work, while graph mode delivers speed, optimization, and functionality in C++ runtime environments. Below is a peek at working with TorchScript:

import torch

class MyModule(torch.nn.Module):
    def __init__(self, N, M):
        super(MyModule, self).__init__()
        self.weight = torch.nn.Parameter(torch.rand(N, M))

    def forward(self, input):
        # Data-dependent control flow is preserved by torch.jit.script
        if input.sum() > 0:
            output = self.weight.mv(input)
        else:
            output = self.weight + input
        return output

# Compile the module to TorchScript and serialize it for use outside Python
my_script_module = torch.jit.script(MyModule(3, 4))
my_script_module.save("my_script_module.pt")
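The serialized module can then be loaded back, either in Python or from the C++ runtime. A minimal Python sketch, reusing the file saved above:

import torch

# Load the compiled TorchScript module; no Python class definition is needed
loaded_module = torch.jit.load("my_script_module.pt")
output = loaded_module(torch.rand(4))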

Similarly, developers use TorchServe to accelerate the path to production. It is an easy-to-use tool for deploying PyTorch models at scale, regardless of cloud or environment, and it provides functionality such as metric logging, multi-model serving, and RESTful endpoints for application integration. A sample of packaging a PyTorch model and serving it with TorchServe follows:

torch-model-archiver --model-name densenet161 \
  --version 1.0 \
  --model-file serve/examples/image_classifier/densenet_161/model.py \
  --serialized-file densenet161-8d451a50.pth \
  --extra-files serve/examples/image_classifier/index_to_name.json \
  --handler image_classifier

torchserve --start --model-store model_store --models densenet161.mar
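Once the server is running, predictions can be requested over the REST inference API, which listens on port 8080 by default. A rough Python sketch; the image file name kitten.jpg is only an illustrative input, and the exact request format depends on the handler:

import requests

# Query the TorchServe inference API for the model registered above
with open("kitten.jpg", "rb") as f:
    response = requests.post(
        "http://127.0.0.1:8080/predictions/densenet161",
        files={"data": f},
    )
print(response.json())  # predicted classes and scores as JSON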

2. Distributed Training

PyTorch supports scalable distributed training with optimized performance in both research and production through the stable torch.distributed backend. This optimization comes from native support for asynchronous execution of collective operations and for peer-to-peer communication, accessible from both Python and C++. Below is an example of torch.distributed:

import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel

# Join the default process group, then wrap an existing model so that
# gradients are averaged across all participating processes
dist.init_process_group(backend="gloo")
model = DistributedDataParallel(model)
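To feed each process its own shard of the data, the wrapped model is typically paired with a DistributedSampler. A minimal sketch, where dataset and num_epochs are placeholders for whatever torch.utils.data.Dataset and training length you already use:

from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler

# Each process sees a different, non-overlapping subset of the dataset
sampler = DistributedSampler(dataset)
loader = DataLoader(dataset, batch_size=32, sampler=sampler)

for epoch in range(num_epochs):
    sampler.set_epoch(epoch)  # reshuffle the shards differently every epoch
    for batch in loader:
        ...  # forward, backward, and optimizer step as usual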

3. Robust Ecosystem

PyTorch is supported by a large community of researchers and developers who have built numerous tools and libraries, giving PyTorch a rich ecosystem. This ecosystem supports development in areas ranging from computer vision to reinforcement learning, and there is also strong support for Natural Language Processing (NLP) and other frameworks. An example from the ecosystem, loading pretrained models from torchvision, follows:

import torchvision.models as models

# Load a selection of pretrained models from the torchvision model zoo
resnet18 = models.resnet18(pretrained=True)
alexnet = models.alexnet(pretrained=True)
squeezenet = models.squeezenet1_0(pretrained=True)
vgg16 = models.vgg16(pretrained=True)
densenet = models.densenet161(pretrained=True)
inception = models.inception_v3(pretrained=True)
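Any of these pretrained models can be put into evaluation mode and applied to a batch of images. A minimal sketch, with a random tensor standing in for a real, preprocessed image batch:

import torch

resnet18.eval()  # disable dropout and use running batch-norm statistics

# One fake RGB image of size 224x224 in place of a preprocessed photo
batch = torch.rand(1, 3, 224, 224)

with torch.no_grad():  # no gradients needed for inference
    logits = resnet18(batch)

predicted_class = logits.argmax(dim=1)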

Moreover, there are features such as cloud support, native ONNX support, a C++ front end, and deployment on iOS and Android. PyTorch has a wide range of tools and libraries, including Captum, PyTorch Geometric, skorch, Opacus, DGL, Flair, PySyft, ClinicaDL, VISSL, MONAI, FairScale, and many more.
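As a small illustration of the native ONNX support, a model such as the resnet18 loaded earlier can be exported with torch.onnx.export; the output file name here is arbitrary:

import torch

# A dummy input fixes the shapes that are traced into the ONNX graph
dummy_input = torch.rand(1, 3, 224, 224)

torch.onnx.export(resnet18, dummy_input, "resnet18.onnx")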

PyTorch makes neural network design accessible to developers. Because computation graphs are built dynamically, the structure of a network can even change at runtime, as illustrated below. Moreover, the major cloud platforms support PyTorch, which gives developers a flexible development process.
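A minimal sketch of that dynamic behavior: the number of layers applied in the forward pass below depends on a value computed at runtime, so the graph differs from call to call:

import torch

class DynamicNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(8, 8)

    def forward(self, x):
        # Apply the layer a random number of times; the graph is rebuilt
        # on every forward pass, so its depth can vary between calls
        for _ in range(int(torch.randint(1, 4, (1,)))):
            x = torch.relu(self.linear(x))
        return x

net = DynamicNet()
out = net(torch.rand(2, 8))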



