ONNX is an open format for representing deep learning models. It lets AI developers move models between state-of-the-art tools and choose the combination that works best for them. ONNX accelerates the path from research to production by enabling interoperability across popular tools including PyTorch, Caffe2, Microsoft Cognitive Toolkit, Apache MXNet, and more.

Framework interoperability

ONNX enables a model to be trained in one framework and then exported and deployed into another for inference. ONNX models are currently supported natively in frameworks such as PyTorch, Caffe2, Microsoft Cognitive Toolkit, Apache MXNet, and Chainer, with additional support for Core ML, TensorFlow, Qualcomm SNPE, NVIDIA's TensorRT, and Intel's nGraph.

Hardware optimizations

Any tool that exports ONNX models can benefit from ONNX-compatible runtimes and libraries designed to maximize performance on the target hardware. ONNX is currently supported by hardware partners including Qualcomm (SNPE), AMD, Arm, Intel, and others.

Creating a more open AI ecosystem with ONNX

Get Started


  1. Install ONNX from binaries using pip or conda, or build it from source.

  2. Review the documentation and tutorials to familiarize yourself with ONNX's functionality and advanced features.

  3. Follow the importing and exporting directions for the frameworks you're using.

  4. Explore and try out the community's models in the ONNX model zoo.

More Tools


PyTorch is an open source deep learning framework built to be flexible and modular for research, with the stability and support needed for production deployment. It enables fast, flexible experimentation through a tape-based autograd system designed for immediate, Python-like execution.
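That tape-based autograd can be seen in a few lines (assuming PyTorch is installed): operations execute immediately as they are written, and `backward()` replays the recorded tape to compute gradients:

```python
import torch

# Operations run eagerly, exactly as written.
x = torch.tensor([2.0, 3.0], requires_grad=True)
y = (x ** 2).sum()   # computed immediately: 4 + 9 = 13

# backward() walks the recorded tape to compute d(y)/d(x) = 2x.
y.backward()
print(x.grad)  # -> tensor([4., 6.])
```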
