Designing Network Design Spaces

June 14, 2020


In this work, we present a new network design paradigm. Our goal is to help advance the understanding of network design and discover design principles that generalize across settings. Instead of focusing on designing individual network instances, we design network design spaces that parametrize populations of networks. The overall process is analogous to classic manual design of networks, but elevated to the design space level. Using our methodology we explore the structure aspect of network design and arrive at a low-dimensional design space consisting of simple, regular networks that we call RegNet. The core insight of the RegNet parametrization is surprisingly simple: widths and depths of good networks can be explained by a quantized linear function. We analyze the RegNet design space and arrive at interesting findings that do not match the current practice of network design. The RegNet design space provides simple and fast networks that work well across a wide range of flop regimes. Under comparable training settings and flops, the RegNet models outperform the popular EfficientNet models while being up to 5× faster on GPUs.
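The quantized linear parametrization mentioned above can be illustrated with a short sketch. This is a minimal, illustrative implementation, not the authors' exact code: the function name `regnet_widths` and the example parameter values (`w_0`, `w_a`, `w_m`, and the channel quantum `q`) are assumptions for demonstration. It computes per-block widths as a linear function of block index and then snaps them to a geometric grid, as the abstract describes.

```python
import numpy as np

def regnet_widths(depth, w_0, w_a, w_m, q=8):
    """Sketch of a quantized linear width schedule.

    depth: number of blocks
    w_0:   initial width
    w_a:   slope of the linear width function
    w_m:   width multiplier defining the quantization grid
    q:     round final widths to a multiple of q channels
    """
    # Linear widths: u_j = w_0 + w_a * j for each block j.
    u = w_0 + w_a * np.arange(depth)
    # Quantize: snap each u_j to the nearest w_0 * w_m**s on a log scale.
    s = np.round(np.log(u / w_0) / np.log(w_m))
    w = w_0 * np.power(w_m, s)
    # Round to the nearest multiple of q (e.g., 8 channels).
    return (np.round(w / q) * q).astype(int)
```

With hypothetical values such as `regnet_widths(13, 24, 36, 2.5)`, the resulting widths are nondecreasing and fall into a small number of distinct stages, which is what makes the resulting networks simple and regular.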



Written by

Ilija Radosavovic

Raj Prateek Kosaraju

Ross Girshick

Kaiming He

Piotr Dollar


Conference on Computer Vision and Pattern Recognition (CVPR)

Research Topics

Computer Vision
