COMPUTER VISION

FBNetV2: Differentiable Neural Architecture Search for Spatial and Channel Dimensions

June 1, 2020

Abstract

Differentiable Neural Architecture Search (DNAS) has demonstrated great success in designing state-of-the-art, efficient neural networks. However, DARTS-based DNAS's search space is small when compared to other search methods', since all candidate network layers must be explicitly instantiated in memory. To address this bottleneck, we propose a memory- and computationally efficient DNAS variant: DMaskingNAS. This algorithm expands the search space by up to 10^14× over conventional DNAS, supporting searches over spatial and channel dimensions that are otherwise prohibitively expensive: input resolution and number of filters. We propose a masking mechanism for feature map reuse, so that memory and computational costs stay nearly constant as the search space expands. Furthermore, we employ effective shape propagation to maximize per-FLOP or per-parameter accuracy. The searched FBNetV2s yield state-of-the-art performance when compared with all previous architectures. With up to 421× less search cost, DMaskingNAS finds models with 0.9% higher accuracy and 15% fewer FLOPs than MobileNetV3-Small, and with similar accuracy but 20% fewer FLOPs than EfficientNet-B0. Furthermore, our FBNetV2 outperforms MobileNetV3 by 2.6% in accuracy, with equivalent model size. FBNetV2 models are open-sourced at https://github.com/facebookresearch/mobile-vision.
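The masking mechanism described above can be sketched in a few lines: instead of instantiating one feature map per candidate channel count, a single feature map at the maximum width is computed once, and the candidates are expressed as binary channel masks combined by a (Gumbel-)softmax-weighted sum. The sketch below is a simplified numpy illustration under stated assumptions, not the paper's implementation; all function and parameter names here are hypothetical.

```python
import numpy as np

def channel_search_mask(feature, channel_options, logits, temperature=1.0, rng=None):
    """Hedged sketch of channel-masking for DNAS: `feature` has shape
    (N, C_max, H, W). Each candidate channel count c is represented by a
    binary mask of 1s on its first c channels; the masks are combined by
    softmax weights into ONE aggregated mask, so memory stays near-constant
    regardless of how many channel candidates are searched."""
    c_max = feature.shape[1]
    # Optional Gumbel noise makes the soft selection stochastic during search;
    # with rng=None the weights reduce to a plain softmax over the logits.
    if rng is not None:
        gumbel = -np.log(-np.log(rng.uniform(size=len(logits))))
    else:
        gumbel = 0.0
    z = (np.asarray(logits, dtype=float) + gumbel) / temperature
    weights = np.exp(z - z.max())
    weights /= weights.sum()
    # One aggregated mask: a weighted sum of the candidates' binary masks.
    mask = np.zeros(c_max)
    for w, c in zip(weights, channel_options):
        mask[:c] += w
    # Broadcast over batch and spatial dimensions.
    return feature * mask[None, :, None, None]
```

With equal logits and no noise, two candidates {4, 8} over an 8-channel feature map yield an aggregated mask of 1.0 on the first four channels (selected by both candidates) and 0.5 on the rest; training then shifts the logits toward the better-performing channel count.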


AUTHORS

Alvin Wan

Xiaoliang Dai

Peizhao Zhang

Zijian He

Yuandong Tian

Saining Xie

Bichen Wu

Matthew Yu

Tao Xu

Kan Chen

Peter Vajda

Joseph E. Gonzalez

Publisher

Conference on Computer Vision and Pattern Recognition (CVPR)

Research Topics

Computer Vision

Related Publications

June 15, 2019

COMPUTER VISION

FBNet: Hardware-Aware Efficient ConvNet Design via Differentiable Neural Architecture Search

Designing accurate and efficient ConvNets for mobile devices is challenging because the design space is combinatorially large. Due to this, previous neural architecture search (NAS) methods are computationally expensive. ConvNet architecture…

Bichen Wu, Xiaoliang Dai, Peizhao Zhang, Yanghan Wang, Fei Sun, Yiming Wu, Yuandong Tian, Peter Vajda, Yangqing Jia, Kurt Keutzer

April 28, 2019

COMPUTER VISION

Inverse Path Tracing for Joint Material and Lighting Estimation

Modern computer vision algorithms have brought significant advancement to 3D geometry reconstruction. However, illumination and material reconstruction remain less studied, with current approaches assuming very simplified models for materials…

Dejan Azinović, Tzu-Mao Li, Anton Kaplanyan, Matthias Nießner

June 14, 2019

COMPUTER VISION

Thinking Outside the Pool: Active Training Image Creation for Relative Attributes

Current wisdom suggests more labeled image data is always better, and obtaining labels is the bottleneck. Yet curating a pool of sufficiently diverse and informative images is itself a challenge. In particular, training image curation is…

Aron Yu, Kristen Grauman

September 09, 2018

COMPUTER VISION

DDRNet: Depth Map Denoising and Refinement for Consumer Depth Cameras Using Cascaded CNNs

Consumer depth sensors are increasingly popular, marked by their recent integration in the latest iPhone X. However, they still suffer from heavy noise, which dramatically limits their applications. Although plenty of…

Shi Yan, Chenglei Wu, Lizhen Wang, Feng Xu, Liang An, Kaiwen Guo, Yebin Liu
