The Early Phase of Neural Network Training

April 27, 2020

Abstract

Recent studies have shown that many important aspects of neural network learning take place within the very earliest iterations or epochs of training. For example, sparse, trainable sub-networks emerge (Frankle et al., 2019), gradient descent moves into a small subspace (Gur-Ari et al., 2018), and the network undergoes a critical period (Achille et al., 2019). Here we examine the changes that deep neural networks undergo during this early phase of training. We perform extensive measurements of the network state during these early iterations of training and leverage the framework of Frankle et al. (2019) to quantitatively probe the weight distribution and its reliance on various aspects of the dataset. We find that, within this framework, deep networks are not robust to reinitializing with random weights while maintaining signs, and that weight distributions are highly non-independent even after only a few hundred iterations. Despite this behavior, pre-training with blurred inputs or an auxiliary self-supervised task can approximate the changes in supervised networks, suggesting that these changes are not inherently label-dependent, though labels significantly accelerate this process. Together, these results help to elucidate the network changes occurring during this pivotal initial period of learning.
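
The abstract describes a probe, following Frankle et al. (2019), in which a network trained for a small number of iterations is reinitialized with random weights that keep the signs the weights had already acquired. The PyTorch sketch below shows one plausible way to implement that manipulation; it is not the authors' released code, and the function name, layer types, and choice of Kaiming initializer are assumptions made for illustration.

import copy
import torch
import torch.nn as nn

def reinit_keeping_signs(trained_model: nn.Module) -> nn.Module:
    # Return a copy of the model whose weights are freshly sampled at random
    # but forced to carry the signs the trained weights had reached.
    probe = copy.deepcopy(trained_model)
    for module in probe.modules():
        if isinstance(module, (nn.Linear, nn.Conv2d)):
            with torch.no_grad():
                signs = torch.sign(module.weight)       # signs learned so far
                nn.init.kaiming_normal_(module.weight)  # fresh random magnitudes
                module.weight.copy_(module.weight.abs() * signs)
    return probe

# Hypothetical usage: train briefly, apply the probe, resume training,
# and compare final accuracy against the unperturbed run.
# probed_net = reinit_keeping_signs(partially_trained_net)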

AUTHORS

Jonathan Frankle

David J. Schwab

Ari Morcos

Recent Publications

February 01, 2021

RANKING & RECOMMENDATIONS

Anytime Inference with Distilled Hierarchical Neural Ensembles

Inference in deep neural networks can be computationally expensive, and networks capable of anytime inference are important in scenarios where the amount of compute or quantity of input data varies over time.…

Adria Ruiz, Jakob Verbeek

January 09, 2021

COMPUTER VISION

Tarsier: Evolving Noise Injection in Super-Resolution GANs

Super-resolution aims at increasing the resolution and level of detail within an image.…

Baptiste Roziere, Nathanaël Carraz Rakotonirina, Vlad Hosu, Andry Rasoanaivo, Hanhe Lin, Camille Couprie, Olivier Teytaud

January 01, 2021

Asynchronous Gradient-Push

We consider a multi-agent framework for distributed optimization where each agent has access to a local smooth strongly convex function, and the collective goal is to achieve consensus on the parameters that minimize the sum of the agents’…

Mahmoud Assran, Michael Rabbat

December 12, 2020

Fit The Right NP-Hard Problem: End-to-end Learning of Integer Programming Constraints

Bridging logical and algorithmic reasoning with modern machine learning techniques is a fundamental challenge with potentially transformative impact…

Anselm Paulus, Michal Rolinek, Vit Musil, Brandon Amos, Georg Martius
