Adversarial Continual Learning

July 17, 2020

Abstract

Continual learning aims to learn new tasks without forgetting previously learned ones. We hypothesize that representations learned to solve each task in a sequence have a shared structure while containing some task-specific properties. We show that shared features are significantly less prone to forgetting and propose a novel hybrid continual learning framework that learns a disjoint representation for task-invariant and task-specific features required to solve a sequence of tasks. Our model combines architecture growth to prevent forgetting of task-specific skills and an experience replay approach to preserve shared skills. We demonstrate that our hybrid approach is effective in avoiding forgetting and show that it is superior to both architecture-based and memory-based approaches on class-incremental learning of a single dataset as well as a sequence of multiple datasets in image classification. Our code is available at https://github.com/facebookresearch/Adversarial-Continual-Learning
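
As a reading aid, the sketch below illustrates the idea described in the abstract in PyTorch-style code: a shared encoder for task-invariant features, one private encoder per task (architecture growth), a per-task classification head, and a discriminator that tries to predict the task identity from the shared features, made adversarial through gradient reversal. This is a minimal sketch under assumed layer sizes and task counts (in_dim, feat_dim, n_tasks, n_classes_per_task, and the class names are illustrative), not the authors' implementation; see the linked repository for the official code.

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    # Identity on the forward pass; flips the gradient sign on the backward
    # pass, so the shared encoder is trained to fool the task discriminator.
    @staticmethod
    def forward(ctx, x):
        return x

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

class ACLSketch(nn.Module):
    # Hypothetical configuration; sizes are illustrative, not those used in the paper.
    def __init__(self, in_dim=784, feat_dim=128, n_tasks=5, n_classes_per_task=2):
        super().__init__()
        # Task-invariant (shared) encoder, trained adversarially.
        self.shared = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        # One private (task-specific) encoder per task: architecture growth.
        self.private = nn.ModuleList(
            [nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU()) for _ in range(n_tasks)]
        )
        # One classification head per task, fed the concatenated features.
        self.heads = nn.ModuleList(
            [nn.Linear(2 * feat_dim, n_classes_per_task) for _ in range(n_tasks)]
        )
        # Discriminator tries to recover the task id from the shared features.
        self.discriminator = nn.Linear(feat_dim, n_tasks)

    def forward(self, x, task_id):
        s = self.shared(x)              # task-invariant features
        p = self.private[task_id](x)    # task-specific features
        logits = self.heads[task_id](torch.cat([s, p], dim=1))
        task_logits = self.discriminator(GradReverse.apply(s))
        return logits, task_logits

A training step would combine the per-task classification loss on logits with a task-prediction loss on task_logits; the gradient-reversal layer turns the latter into an adversarial objective that pushes the shared encoder toward task-invariant features, while replayed samples from earlier tasks can be mixed into each batch to preserve the shared skills.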

AUTHORS

Marcus Rohrbach, Franziska Meier, Roberto Calandra, Sayna Ebrahimi, Trevor Darrell

Publisher

ECCV

Research Topics

Computer Vision

Related Publications

May 09, 2023

COMPUTER VISION

ImageBind: One Embedding Space To Bind Them All

Rohit Girdhar, Alaa El-Nouby, Zhuang Liu, Mannat Singh, Kalyan Vasudev Alwala, Armand Joulin, Ishan Misra

April 05, 2023

COMPUTER VISION

Segment Anything

Alexander Kirillov, Alex Berg, Chloe Rolland, Eric Mintun, Hanzi Mao, Laura Gustafson, Nikhila Ravi, Piotr Dollar, Ross Girshick, Spencer Whitehead, Wan-Yen Lo

March 09, 2023

COMPUTER VISION

The Casual Conversations v2 Dataset

Bilal Porgali, Vítor Albiero, Jordan Ryda, Cristian Canton Ferrer, Caner Hazirbas

February 21, 2023

COMPUTER VISION

CORE MACHINE LEARNING

ArchRepair: Block-Level Architecture-Oriented Repairing for Deep Neural Networks

Felix Xu, Fuyuan Zhang, Hua Qi, Jianjun Zhao, Jianlang Chen, Lei Ma, Qing Guo, Zhijie Wang
