
Debugging the Internals of Convolutional Networks

December 06, 2021

Abstract

The filters learned by Convolutional Neural Networks (CNNs), and the feature maps these filters compute, are sensitive to convolution arithmetic. Several architectural choices that dictate this arithmetic, such as padding and stride, can introduce feature-map artifacts. These artifacts can interfere with the downstream task and degrade accuracy and robustness. We present visual-debugging techniques that surface feature-map artifacts and reveal how they emerge in CNNs. These techniques also help analyze the impact of the artifacts on the weights the model learns. Guided by this analysis, model developers can make informed architectural choices that verifiably mitigate harmful artifacts and improve the model's accuracy and its robustness to distribution shift.
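
Although the page carries only the abstract, the underlying idea is easy to probe in code. Convolution arithmetic ties the output size o to the input size i, kernel size k, padding p, and stride s via o = floor((i + 2p - k) / s) + 1, so padding and stride choices directly shape how each filter sees the input border. The sketch below is a minimal illustration, assuming PyTorch and a torchvision ResNet-18 rather than the paper's own tooling: it averages one layer's feature maps over inputs and channels and flags spatial positions whose mean activation deviates from the interior.

```python
# A minimal sketch of surfacing feature-map artifacts (illustrative only,
# not the paper's tooling). We hook a convolution layer in a pretrained
# ResNet-18, average its feature maps over inputs and channels, and flag
# spatial positions whose mean activation deviates from the interior.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

captured = []

def hook(module, inputs, output):
    # Store the feature map computed by the hooked layer.
    captured.append(output.detach())

# The first convolution (7x7, stride 2, padding 3) is a natural place
# to look for padding- and stride-induced artifacts.
handle = model.conv1.register_forward_hook(hook)

with torch.no_grad():
    # Random inputs suffice: patterns that persist across random inputs
    # are input-independent spatial biases of the network itself.
    model(torch.randn(8, 3, 224, 224))
handle.remove()

fmap = captured[0]                    # shape (N, C, H, W)
spatial_mean = fmap.mean(dim=(0, 1))  # average over batch and channels -> (H, W)

# Compare each position against the interior average; large deviations
# concentrated along the borders are candidate artifact locations.
interior = spatial_mean[2:-2, 2:-2].mean()
deviation = (spatial_mean - interior).abs()
print(f"interior mean: {interior.item():.4f}")
print(f"max deviation (often at borders): {deviation.max().item():.4f}")
```

Consistently elevated or suppressed activations along border rows and columns are a typical signature of zero-padding artifacts, while periodic grid patterns can point to stride-related ones.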

Authors

Bilal Alsallakh

Narine Kokhlikyan

Vivek Miglani

Shubham Muttepawar

Edward Wang (AI Infra)

Sara Zhang

David Adkins

Orion Reblitz-Richardson

Publisher

NeurIPS Workshop

Research Topics

Computer Vision

Core Machine Learning

Related Publications

December 06, 2021

CORE MACHINE LEARNING

Revisiting Graph Neural Networks for Link Prediction

Yinglong Xia

December 06, 2021

COMPUTER VISION

Early Convolutions Help Transformers See Better

Tete Xiao, Mannat Singh, Eric Mintun, Trevor Darrell, Piotr Dollar, Ross Girshick

December 06, 2021

INTEGRITY

CORE MACHINE LEARNING

BulletTrain: Accelerating Robust Neural Network Training via Boundary Example Mining

Weizhe Hua, Yichi Zhang, Chuan Guo, Zhiru Zhang, Edward Suh

December 06, 2021

THEORY

CORE MACHINE LEARNING

Learning on Random Balls is Sufficient for Estimating (Some) Graph Parameters

Takanori Maehara, Hoang NT
