Open-sourcing Captum: A model interpretability library for PyTorch

October 10, 2019

Written by Narine Kokhlikyan, Vivek Miglani, Edward Wang, Orion Reblitz-Richardson



What it is:

Captum is a powerful, flexible, and easy-to-use model interpretability library for PyTorch. It makes state-of-the-art algorithms for interpretability readily accessible to the entire PyTorch community, so researchers and developers can better understand which features, neurons, and layers are contributing to a model’s predictions. Captum supports model interpretability across modalities such as vision and text, and its extensible design allows researchers to add new algorithms. Captum also allows researchers to quickly benchmark their work against other existing algorithms available in the library.

For model developers, Captum can help improve and troubleshoot models by identifying the features that contribute to a model’s output, making it easier to refine designs and diagnose unexpected results. We are also sharing an early release of Captum Insights, an interpretability visualization widget built on top of Captum. Captum Insights works across images, text, and other features to help users understand feature attribution. Captum Insights supports Integrated Gradients now, and we will be expanding to support other algorithms in future releases. More information is available at captum.ai.

What it does:

Captum implements state-of-the-art interpretability algorithms, including Integrated Gradients, DeepLIFT, and Conductance. Developers can use Captum to understand feature importance or perform a deep dive on neural networks to understand neuron and layer attributions.

Why it matters:

Machine learning is used today across a wide range of industries that affect billions of people’s lives. Models are also becoming more complex as sophisticated new techniques are put into production. It’s important for ML developers to understand why their models produce the results they do and to be able to explain these results clearly to others.

Model interpretability libraries such as Captum help engineers create more reliable, predictable, and better-performing AI systems. They can inform decision-making about how those systems are used and build trust with others. In addition, as the number of multimodal models increases, the ability for interpretability libraries and visualizations to work seamlessly across such modalities will be crucial.

Captum brings these interpretability capabilities directly to the PyTorch ecosystem to facilitate better models and model research.

Get it on GitHub

https://github.com/pytorch/captum

Written by

Narine Kokhlikyan

Research Scientist

Vivek Miglani

Software Engineer

Edward Wang

Software Engineer

Orion Reblitz-Richardson

Engineering Manager