COMPUTER VISION

ML APPLICATIONS

Making Heads or Tails: Towards Semantically Consistent Visual Counterfactuals

July 18, 2022

Abstract

A visual counterfactual explanation replaces image regions in a query image with regions from a distractor image such that the system's decision on the transformed image changes to the distractor class. In this work, we present a novel framework for computing visual counterfactual explanations based on two key ideas. First, we enforce that the replaced and replacer regions contain the same semantic part, resulting in more semantically consistent explanations. Second, we use multiple distractor images in a computationally efficient way and obtain more discriminative explanations with fewer region replacements. Our approach is 27% more semantically consistent and an order of magnitude faster than a competing method on three fine-grained image recognition datasets. We highlight the utility of our counterfactuals over existing works through machine teaching experiments where we teach humans to classify different bird species. We also complement our explanations with the vocabulary of parts and attributes that contributed the most to the system's decision. In this task as well, we obtain state-of-the-art results when using our counterfactual explanations relative to existing works, reinforcing the importance of semantically consistent explanations. Source code is available at https://github.com/facebookresearch/visual-counterfactuals.
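To make the two ideas above concrete, below is a minimal sketch of a greedy region-replacement counterfactual search with the same-semantic-part constraint. It is not the released implementation (see the repository linked above); the function and argument names (greedy_counterfactual, classifier_head, query_parts, and so on) are illustrative assumptions, and a single distractor image is used for brevity.

```python
import torch

def greedy_counterfactual(query_cells, distractor_cells, query_parts, distractor_parts,
                          classifier_head, distractor_class, max_edits=5):
    """Greedily replace query feature cells with same-part distractor cells until
    the classifier assigns the edited representation to the distractor class.

    query_cells / distractor_cells: (N, D) spatial feature cells from a frozen backbone.
    query_parts / distractor_parts: length-N semantic-part labels per cell (e.g. beak, wing).
    classifier_head: maps a pooled (1, D) feature to class logits.
    """
    edited = query_cells.clone()
    edits = []
    for _ in range(max_edits):
        best_score, best_pair, best_cells = None, None, None
        for i in range(edited.size(0)):                    # query cell to replace
            for j in range(distractor_cells.size(0)):      # candidate replacement cell
                if query_parts[i] != distractor_parts[j]:
                    continue                               # same-semantic-part constraint
                trial = edited.clone()
                trial[i] = distractor_cells[j]
                # Score the edit by the distractor-class logit on average-pooled cells.
                logit = classifier_head(trial.mean(0, keepdim=True))[0, distractor_class]
                if best_score is None or logit > best_score:
                    best_score, best_pair, best_cells = logit, (i, j), trial
        if best_pair is None:                              # no same-part pair available
            break
        edited, edits = best_cells, edits + [best_pair]
        pred = classifier_head(edited.mean(0, keepdim=True)).argmax(1).item()
        if pred == distractor_class:                       # decision flipped: stop
            break
    return edits, edited
```

In this simplified form, the inner loop exhaustively scores every same-part cell pair against the distractor-class logit. The paper additionally draws replacement candidates from multiple distractor images in a computationally efficient way, which this sketch omits.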

AUTHORS

Deepti Ghadiyaram

Dhruv Mahajan

Filip Radenovic

Simon Vandenhende

Publisher

ECCV

Research Topics

Computer Vision

Core Machine Learning

Related Publications

December 13, 2022

NLP

COMPUTER VISION

Efficient Self-supervised Learning with Contextualized Target Representations for Vision, Speech and Language

Michael Auli, Alexei Baevski, Arun Babu, Wei-Ning Hsu

November 28, 2022

RESEARCH

CORE MACHINE LEARNING

Neural Attentive Circuits

Nicolas Ballas, Bernhard Schölkopf, Chris Pal, Francesco Locatello, Li Erran Li, Martin Weiss, Nasim Rahaman, Yoshua Bengio

November 23, 2022

THEORY

CORE MACHINE LEARNING

Generalization Bounds for Deep Transfer Learning Using Majority Predictor Accuracy

Tal Hassner, Cuong N. Nguyen, Cuong V. Nguyen, Lam Si Tung Ho, Vu Dinh

November 16, 2022

RESEARCH

NLP

Memorization Without Overfitting: Analyzing the Training Dynamics of Large Language Models

Kushal Tirumala, Aram H. Markosyan, Armen Aghajanyan, Luke Zettlemoyer
