Green Federated Learning

April 26, 2023

Abstract

The rapid progress of AI is fueled by increasingly large and computationally intensive machine learning models and datasets. As a consequence, the amount of compute used in training state-of-the-art models is increasing exponentially (doubling every 10 months between 2015 and 2022), resulting in a large carbon footprint. Federated Learning (FL), a collaborative machine learning technique for training a centralized model using data of decentralized entities, can also be resource-intensive and have a significant carbon footprint, particularly when deployed at scale. Unlike centralized AI, which can reliably tap into renewables at strategically placed data centers, cross-device FL may leverage as many as hundreds of millions of globally distributed end-user devices with diverse energy sources. Green AI is a novel and important research area where carbon footprint is regarded as an evaluation criterion for AI, alongside accuracy, convergence speed, and other metrics. In this paper, we propose the concept of Green FL, which involves optimizing FL parameters and making design choices that minimize carbon emissions while maintaining competitive performance and training time. The contributions of this work are twofold. First, we adopt a data-driven approach to quantify the carbon emissions of FL by directly measuring real-world, at-scale FL tasks running on millions of phones. Second, we present challenges, guidelines, and lessons learned from studying the trade-off between energy efficiency, performance, and time-to-train in a production FL system. Our findings offer valuable insights into how FL can reduce its carbon footprint, and they provide a foundation for future research in the area of Green AI.
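The abstract treats carbon footprint as a first-class evaluation criterion alongside accuracy and time-to-train. As a rough illustration of the accounting this implies, and not the paper's actual measurement methodology, the sketch below estimates the operational CO2e of a cross-device FL task from assumed per-client energy use and regional grid carbon intensity; every name and number in it is a hypothetical placeholder.

```python
# Minimal, illustrative sketch (not the paper's methodology): operational CO2e
# for a cross-device FL task, assuming per-client energy (compute + communication,
# in kWh) and regional grid carbon intensity (gCO2e/kWh) are known.
# All names and numbers below are hypothetical.
from dataclasses import dataclass

@dataclass
class Client:
    compute_kwh: float        # energy for local training on the device
    comm_kwh: float           # energy for uploading/downloading model updates
    grid_gco2_per_kwh: float  # carbon intensity of the client's local grid

def round_co2e_g(clients):
    """CO2e in grams for one FL round, summed over participating clients."""
    return sum((c.compute_kwh + c.comm_kwh) * c.grid_gco2_per_kwh for c in clients)

def task_co2e_kg(rounds, server_kwh=0.0, server_gco2_per_kwh=0.0):
    """Total CO2e in kilograms for a training task: all rounds plus server-side aggregation."""
    return (sum(round_co2e_g(r) for r in rounds)
            + server_kwh * server_gco2_per_kwh) / 1000.0

# Hypothetical example: 100 rounds with the same 3 clients participating in each.
clients = [Client(0.0020, 0.0005, 450.0),
           Client(0.0030, 0.0004, 300.0),
           Client(0.0025, 0.0006, 520.0)]
print(f"{task_co2e_kg([clients] * 100, server_kwh=5.0, server_gco2_per_kwh=400.0):.2f} kg CO2e")
```

Because grid carbon intensity varies across regions and over time, which clients participate and when directly affects the resulting footprint; this is the kind of trade-off between energy efficiency, performance, and time-to-train that the paper studies in a production FL system.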


AUTHORS

Ashkan Yousefpour, Shen Guo, Ashish Shenoy, Sayan Ghosh, Pierre Stock, Kiwan Maeng, Schalk Krüger, Mike Rabbat, Carole-Jean Wu, Ilya Mironov

Publisher

arXiv

Research Topics

Systems Research

Core Machine Learning

Related Publications

May 04, 2023

ROBOTICS

REINFORCEMENT LEARNING

MoDem: Accelerating Visual Model-Based Reinforcement Learning with Demonstrations

Nicklas Hansen, Yixin Lin, Hao Su, Xiaolong Wang, Vikash Kumar, Aravind Rajeswaran

May 01, 2023

THEORY

CORE MACHINE LEARNING

Meta-Learning in Games

Keegan Harris, Ioannis Anagnostides, Gabriele Farina, Mikhail Khodak, Zhiwei Steven Wu, Tuomas Sandholm, Maria-Florina Balcan

February 28, 2023

CORE MACHINE LEARNING

On the duality between contrastive and non-contrastive self-supervised learning

Quentin Garrido, Adrien Bardes, Yann LeCun, Yubei Chen, Laurent Najman

February 21, 2023

COMPUTER VISION

CORE MACHINE LEARNING

ArchRepair: Block-Level Architecture-Oriented Repairing for Deep Neural Networks

Felix Xu, Fuyuan Zhang, Hua Qi, Jianjun Zhao, Jianlang Chen, Lei Ma, Qing Guo, Zhijie Wang
