SYSTEMS RESEARCH

Ditto: Fair and robust federated learning through personalization

May 7, 2021

Abstract

Fairness and robustness are two important concerns for federated learning systems. In this work, we identify that robustness to data and model poisoning attacks and fairness, measured as the uniformity of performance across devices, are competing constraints in statistically heterogeneous networks. To address these constraints, we propose employing a simple, general framework for personalized federated learning, Ditto, and develop a scalable solver for it. Theoretically, we analyze the ability of Ditto to achieve fairness and robustness simultaneously on a class of linear problems. Empirically, across a suite of federated datasets, we show that Ditto not only achieves competitive performance relative to recent personalization methods, but also enables more accurate, robust, and fair models relative to state-of-the-art fair or robust baselines.
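As described in the paper, Ditto personalizes by training a per-device model that is regularized toward the shared global model: each device k minimizes its local loss F_k(v_k) plus (λ/2)‖v_k − w*‖², where w* is the global model. The sketch below illustrates this personalization step on a simple least-squares local loss; the function name, loss choice, and optimizer settings are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ditto_personalize(X, y, w_global, lam=0.1, lr=0.01, steps=200):
    """Sketch of Ditto's per-device personalization step:
    minimize F_k(v) + (lam/2) * ||v - w_global||^2 by gradient descent,
    with a least-squares local loss F_k for illustration."""
    v = w_global.copy()
    n = len(y)
    for _ in range(steps):
        grad_local = X.T @ (X @ v - y) / n   # gradient of 0.5 * mean squared error
        grad_reg = lam * (v - w_global)      # pulls v toward the global model
        v -= lr * (grad_local + grad_reg)
    return v
```

The hyperparameter λ interpolates between purely local training (λ = 0, where v fits only the device's own data) and the global model (λ → ∞, where v ≈ w*), which is how Ditto trades off personalization against robustness to corrupted devices.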

AUTHORS

Tian Li

Shengyuan Hu

Ahmad Beirami

Virginia Smith

Publisher

ICLR 2021

Research Topics

Systems Research

Related Publications

December 07, 2018

SYSTEMS RESEARCH

Rethinking floating point for deep learning

Reducing hardware overhead of neural networks for faster or lower power inference and training is an active area of research. Uniform quantization using integer multiply-add has been thoroughly investigated, which requires learning many…

Jeff Johnson

June 22, 2015

SYSTEMS RESEARCH

NLP

Fast Convolutional Nets With fbfft: A GPU Performance Evaluation

We examine the performance profile of Convolutional Neural Network training on the current generation of NVIDIA Graphics Processing Units. We introduce two new Fast Fourier Transform convolution implementations: one based on NVIDIA’s cuFFT…

Nicolas Vasilache, Jeff Johnson, Michael Mathieu, Soumith Chintala, Serkan Piantino, Yann LeCun

March 02, 2020

SYSTEMS RESEARCH

Federated Optimization in Heterogeneous Networks

Federated Learning is a distributed learning paradigm with two key challenges that differentiate it from traditional distributed optimization: (1) significant variability in terms of the systems characteristics on each device in the network…

Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, Virginia Smith

September 01, 2020

SYSTEMS RESEARCH

ResiliNet: Failure-Resilient Inference in Distributed Neural Networks

Techniques such as Federated Learning and Split Learning aim to train distributed deep learning models without sharing private data.…

Ashkan Yousefpour, Brian Q. Nguyen, Siddartha Devic, Guanhua Wang, Aboudy Kreidieh, Hans Lobel, Alexandre M. Bayen, Jason P. Jue
