ResiliNet: Failure-Resilient Inference in Distributed Neural Networks

September 1, 2020

Abstract

Techniques such as Federated Learning and Split Learning aim to train distributed deep learning models without sharing private data. In Split Learning, when a neural network is partitioned and distributed across physical nodes, the failure of a physical node takes down the neural units placed on that node, causing a significant drop in performance. Current approaches focus on the resiliency of training in distributed neural networks; the resiliency of inference is less explored. We introduce ResiliNet, a scheme for making inference in distributed neural networks resilient to physical node failures. ResiliNet combines two concepts to provide resiliency: skip hyperconnections, which skip over nodes in a distributed neural network much as skip connections do in ResNets, and failout, a novel technique introduced in this paper. Failout simulates physical node failure conditions during training using dropout and is specifically designed to improve the resiliency of distributed neural networks. Experiments and ablation studies on three datasets confirm that ResiliNet provides inference resiliency for distributed neural networks.
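The abstract describes failout and skip hyperconnections only at a high level. Below is a minimal PyTorch sketch of the two mechanisms under simplifying assumptions: a vertically partitioned MLP with one partition per physical node, a learned projection on the skip path, and an illustrative failure probability p_fail. The class name FailoutPartition and the exact wiring are hypothetical, not taken from the paper or its code release.

```python
import torch
import torch.nn as nn

class FailoutPartition(nn.Module):
    """Illustrative sketch of one physical node's slice of a split network.

    - Failout: during training, the whole partition output is zeroed with
      probability p_fail, simulating a node failure (dropout applied at the
      granularity of a node rather than a single unit).
    - Skip hyperconnection: a path that carries the node's input around the
      node, so downstream layers still receive a signal if this node fails
      at inference time.
    """

    def __init__(self, in_dim: int, out_dim: int, p_fail: float = 0.1):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU())
        # Projection so the skip path matches the output width (a
        # simplification; the paper routes a node's output around the
        # next node directly).
        self.skip = nn.Linear(in_dim, out_dim, bias=False)
        self.p_fail = p_fail

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.body(x)
        if self.training and torch.rand(()).item() < self.p_fail:
            out = torch.zeros_like(out)  # failout: simulate node failure
        return out + self.skip(x)        # skip hyperconnection

# Minimal usage: three "physical nodes" feeding a classifier head.
net = nn.Sequential(
    FailoutPartition(32, 64),
    FailoutPartition(64, 64),
    FailoutPartition(64, 64),
    nn.Linear(64, 10),
)
logits = net(torch.randn(8, 32))  # runs in both train and eval modes
```

Note that the failure is sampled once per forward pass for the whole partition, not per unit; that per-node granularity is what distinguishes failout from standard dropout.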

Authors

Ashkan Yousefpour, Brian Q. Nguyen, Siddartha Devic, Guanhua Wang, Aboudy Kreidieh, Hans Lobel, Alexandre M. Bayen, Jason P. Jue

Publisher

International Workshop on Federated Learning for User Privacy and Data Confidentiality (FL-ICML)

Research Topics

Systems Research

Related Publications

August 08, 2022

Core Machine Learning

Opacus: User-Friendly Differential Privacy Library in PyTorch

Ashkan Yousefpour, Akash Bharadwaj, Alex Sablayrolles, Graham Cormode, Igor Shilov, Ilya Mironov, Jessica Zhao, John Nguyen, Karthik Prasad, Mani Malek, Sayan Ghosh

December 06, 2018

Systems Research

Rethinking floating point for deep learning

Jeff Johnson

June 22, 2015

Systems Research

NLP

Fast Convolutional Nets With fbfft: A GPU Performance Evaluation

Nicolas Vasilache, Jeff Johnson, Michael Mathieu, Soumith Chintala, Serkan Piantino, Yann LeCun

March 02, 2020

Systems Research

Federated Optimization in Heterogeneous Networks

Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, Virginia Smith
