ONNX is an open format for representing deep learning models, allowing AI developers to easily move models between state-of-the-art tools and choose the combination that works best for them. ONNX accelerates the path from research to production by enabling interoperability across popular tools including PyTorch, Caffe2, Microsoft Cognitive Toolkit, Apache MXNet, and more.
ONNX enables models to be trained in one framework, and then exported and deployed into other frameworks for inference. ONNX models are currently supported in frameworks such as PyTorch, Caffe2, Microsoft Cognitive Toolkit, Apache MXNet and Chainer with additional support for Core ML, TensorFlow, Qualcomm SNPE, Nvidia's TensorRT and Intel's nGraph.
Any tool that exports ONNX models can benefit from ONNX-compatible runtimes and libraries designed to maximize performance. Hardware partners currently providing ONNX support include Qualcomm (via SNPE), AMD, ARM, Intel, and others.
Install ONNX from binaries using pip or conda, or build from source.
To get started, follow the import and export directions for the frameworks you're using.
Explore and try out the community's models in the ONNX model zoo.
PyTorch is an open source deep learning framework built to be flexible and modular for research, with the stability and support needed for production deployment. It enables fast, flexible experimentation through a tape-based autograd system designed for immediate, Python-like execution.