
ML APPLICATIONS

Student Specialization in Deep Rectified Networks With Finite Width and Input Dimension

July 14, 2020

Abstract

We consider a deep ReLU / Leaky ReLU student network trained with Stochastic Gradient Descent (SGD) to fit the output of a fixed teacher network of the same depth. The student network is over-realized: at each layer l, the number n_l of student nodes exceeds the number m_l of teacher nodes. Under mild conditions on the dataset and the teacher network, we prove that when the gradient is small at every data sample, each teacher node at the lowest layer is specialized by (i.e., aligned in weight direction with) at least one student node. For two-layer networks, such specialization can be achieved by training on any dataset of polynomial size O(K^{5/2} d^3 ϵ^{−1}), until the gradient magnitude drops to O(ϵ / (K^{3/2} d^{1/2})). Here d is the input dimension and K = m_1 + n_1 is the total number of lowest-layer neurons in the teacher and student combined. Note that we require a specific form of data augmentation, and the sample complexity includes the additional data generated by that augmentation. To the best of our knowledge, we are the first to give a polynomial sample complexity for student specialization when training two-layer (Leaky) ReLU networks of finite width in the teacher-student setting, and a finite sample complexity for lowest-layer specialization in the multi-layer case, without parametric assumptions on the input distribution (e.g., Gaussian). Our theory suggests that teacher nodes with large fan-out weights are specialized first, while the gradient is still large, whereas the remaining teacher nodes are specialized only once the gradient becomes small; this points to an inductive bias in training and explains the stages of training observed empirically in multiple previous works. Experiments on synthetic data and CIFAR-10 verify our findings. The code is released at https://github.com/facebookresearch/luckmatters/
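To make the setting concrete, below is a minimal PyTorch sketch of the two-layer teacher-student setup the abstract describes: a fixed teacher, an over-realized student (n_1 > m_1 hidden nodes), SGD minimizing the squared error against the teacher's output, and a rough post-hoc check of node specialization. All specifics here (the widths, learning rate, batch size, the Gaussian inputs used purely for the demo, and the cosine-similarity check) are illustrative assumptions, not the paper's actual protocol; in particular, the paper's results do not require Gaussian inputs, and they rely on a specific data-augmentation scheme that this sketch does not reproduce.

```python
# Minimal teacher-student sketch (illustrative assumptions, not the paper's code).
import torch
import torch.nn as nn

d, m1, n1 = 20, 5, 15  # input dim d, teacher width m1, student width n1 (n1 > m1)

def two_layer(width):
    # Two-layer ReLU network with scalar output.
    return nn.Sequential(nn.Linear(d, width), nn.ReLU(), nn.Linear(width, 1))

teacher = two_layer(m1)
for p in teacher.parameters():  # the teacher is fixed; only the student is trained
    p.requires_grad_(False)

student = two_layer(n1)         # over-realized: more hidden nodes than the teacher
opt = torch.optim.SGD(student.parameters(), lr=1e-2)

for step in range(10_000):
    x = torch.randn(256, d)                         # Gaussian inputs, demo only
    loss = ((student(x) - teacher(x)) ** 2).mean()  # fit the teacher's output
    opt.zero_grad()
    loss.backward()
    opt.step()

# Specialization predicts each teacher hidden node is matched in weight
# direction by at least one student hidden node. A rough check via cosine
# similarity between normalized lowest-layer weight rows:
with torch.no_grad():
    wt = teacher[0].weight / teacher[0].weight.norm(dim=1, keepdim=True)
    ws = student[0].weight / student[0].weight.norm(dim=1, keepdim=True)
    print((wt @ ws.t()).max(dim=1).values)  # best match per teacher node
```

If specialization occurs, each entry of the printed vector (one per teacher node) should approach 1, meaning some student node has aligned with that teacher node's weight direction.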


AUTHORS


Yuandong Tian

Research Topics

Artificial Intelligence

Related Publications

June 02, 2019

Simple Attention-Based Representation Learning for Ranking Short Social Media Posts

This paper explores the problem of ranking short social media posts with respect to user queries using neural networks. Instead of starting with a complex architecture, we proceed from the bottom up and examine the effectiveness of a simple,…

Peng Shi, Jinfeng Rao, Jimmy Lin


June 09, 2019

THEORY

First-order Adversarial Vulnerability of Neural Networks and Input Dimension

Over the past few years, neural networks have been shown to be vulnerable to adversarial images: targeted but imperceptible image perturbations lead to drastically different predictions. We show that adversarial vulnerability increases with the gradients…

Carl-Johann Simon-Gabriel, Yann Ollivier, Bernhard Schölkopf, Léon Bottou, David Lopez-Paz


May 31, 2019

INTEGRITY

Abusive Language Detection with Graph Convolutional Networks

Abuse on the Internet represents a significant societal problem of our time. Previous research on automated abusive language detection in Twitter has shown that community-based profiling of users is a promising technique for this task. However,…

Pushkar Mishra, Marco Del Tredici, Helen Yannakoudakis, Ekaterina Shutova


June 01, 2019

Probabilistic Planning with Reduced Models

Reduced models are simplified versions of a given domain, designed to accelerate the planning process. Interest in reduced models has grown since the surprising success of determinization in the first international probabilistic planning…

Luis Pineda, Shlomo Zilberstein

