IIRC: Incremental Implicitly-Refined Classification

June 16, 2021

Abstract

We introduce the “Incremental Implicitly-Refined Classification (IIRC)” setup, an extension of the class-incremental learning setup in which the incoming batches of classes have two levels of granularity: each sample has a high-level (coarse) label like “bear” and a low-level (fine) label like “polar bear.” Only one label is provided at a time, and the model has to infer the other label if it has already learned it. This setup is more aligned with real-life scenarios, where a learner usually interacts with the same family of entities multiple times, discovering more granularity about them while still trying not to forget previous knowledge. Moreover, this setup enables evaluating models on important lifelong learning challenges that cannot be easily addressed under existing setups. These challenges can be motivated by an example: if a model was trained on the class “bear” in one task and on “polar bear” in another, will it forget the concept of bear? Will it rightfully infer that a polar bear is still a bear? And will it wrongfully associate the label “polar bear” with other breeds of bear? We develop a standardized benchmark for evaluating models on the IIRC setup, evaluate several state-of-the-art lifelong learning algorithms on it, and highlight their strengths and limitations. For example, distillation-based methods perform relatively well but are prone to incorrectly predicting too many labels per image. We hope that the proposed setup, along with the benchmark, provides a meaningful problem setting for practitioners.
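To make the setup concrete, here is a minimal, hypothetical Python sketch of the expected-label logic described above. The HIERARCHY mapping, the IIRCEvaluator class, and its methods are illustrative names of our own, not the benchmark's released API: once both the coarse and fine labels of a family have been introduced across tasks, a sample carrying the fine label is evaluated against both.

from dataclasses import dataclass, field

# Two-level label hierarchy: fine label -> coarse label (None for classes
# without a superclass). Class names are illustrative only.
HIERARCHY = {"polar_bear": "bear", "brown_bear": "bear", "lamp": None}

@dataclass
class IIRCEvaluator:
    # Labels introduced by the incremental tasks seen so far.
    seen_labels: set = field(default_factory=set)

    def observe_task(self, labels):
        """Record the labels introduced by a new incremental task."""
        self.seen_labels.update(labels)

    def target_set(self, provided_label):
        """Labels the model is expected to predict for a sample.

        Only one label accompanies the sample, but the model must also
        predict any already-seen label linked to it in the hierarchy.
        """
        targets = {provided_label}
        parent = HIERARCHY.get(provided_label)
        if parent is not None and parent in self.seen_labels:
            targets.add(parent)
        return targets

# Usage: "bear" is learned in task 1 and "polar bear" in task 2, so a
# polar-bear sample is afterwards evaluated against both labels.
evaluator = IIRCEvaluator()
evaluator.observe_task({"bear"})
evaluator.observe_task({"polar_bear"})
assert evaluator.target_set("polar_bear") == {"polar_bear", "bear"}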


Written by

Mohamed Abdelsalam

Mojtaba Faramarzi

Shagun Sodhani

Sarath Chandar

Publisher

CVPR 2021

Research Topics

Core Machine Learning

Computer Vision

Related Publications

November 27, 2022

Core Machine Learning

Neural Attentive Circuits

Nicolas Ballas, Bernhard Schölkopf, Chris Pal, Francesco Locatello, Li Erran Li, Martin Weiss, Nasim Rahaman, Yoshua Bengio

November 16, 2022

NLP

Memorization Without Overfitting: Analyzing the Training Dynamics of Large Language Models

Kushal Tirumala, Aram H. Markosyan, Armen Aghajanyan, Luke Zettlemoyer

November 08, 2022

Theory

Beyond neural scaling laws: beating power law scaling via data pruning

Ari Morcos, Shashank Shekhar, Surya Ganguli, Ben Sorscher, Robert Geirhos

August 08, 2022

Core Machine Learning

Opacus: User-Friendly Differential Privacy Library in PyTorch

Ashkan Yousefpour, Akash Bharadwaj, Alex Sablayrolles, Graham Cormode, Igor Shilov, Ilya Mironov, Jessica Zhao, John Nguyen, Karthik Prasad, Mani Malek, Sayan Ghosh

December 07, 2020

Core Machine Learning

Adversarial Example Games

Avishek Joey Bose, Gauthier Gidel, Andre Cianflone, Pascal Vincent, Simon Lacoste-Julien, William L. Hamilton

November 03, 2020

Core Machine Learning

Robust Embedded Deep K-means Clustering

Rui Zhang, Hanghang Tong, Yinglong Xia, Yada Zhu
