Ahmad Beirami

Ahmad is a Research Scientist at Facebook AI, contributing to conversational AI research. His current research interests include reinforcement learning, meta-learning, adversarial learning, federated learning, and privacy/fairness in machine learning. He received his BS degree from Sharif University of Technology, Iran, in 2007, and his PhD degree in electrical and computer engineering from the Georgia Institute of Technology in 2014. Ahmad is the recipient of the 2015 Sigma Xi Best PhD Thesis Award from Georgia Tech.

Ahmad's Publications

May 07, 2021

RESEARCH

SYSTEMS RESEARCH

Ditto: Fair and robust federated learning through personalization

Fairness and robustness are two important concerns for federated learning systems. …

Tian Li, Shengyuan Hu, Ahmad Beirami, Virginia Smith

December 11, 2020

RESEARCH

CORE MACHINE LEARNING

Federated Multi-Task Learning for Competing Constraints

In addition to accuracy, fairness and robustness are two critical concerns for federated learning systems.…

Tian Li, Shengyuan Hu, Ahmad Beirami, Virginia Smith

December 08, 2020

CONVERSATIONAL AI

NLP

Situated and Interactive Multimodal Conversations

Next generation virtual assistants are envisioned to handle multimodal inputs (e.g., vision, memories of previous interactions, and the user’s utterances)…

Seungwhan Moon, Satwik Kottur, Paul A. Crook, Ankita De, Shivani Poddar, Theodore Levin, David Whitney, Daniel Difranco, Ahmad Beirami, Eunjoon Cho, Rajen Subba, Alborz Geramifard

December 08, 2020

CONVERSATIONAL AI

NLP

Resource Constrained Dialog Policy Learning via Differentiable Inductive Logic Programming

Motivated by the needs of resource constrained dialog policy learning, we introduce dialog policy via differentiable inductive logic (DILOG).…

Zhenpeng Zhou, Ahmad Beirami, Paul Crook, Pararth Shah, Rajen Subba, Alborz Geramifard

May 07, 2021

CORE MACHINE LEARNING

THEORY

Tilted Empirical Risk Minimization

Empirical risk minimization (ERM) is typically designed to perform well on the average loss, which can result in estimators that are sensitive to outliers …

Tian Li, Ahmad Beirami, Maziar Sanjabi, Virginia Smith
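The tilted objective described in this paper replaces ERM's plain average of per-sample losses with a log-sum-exp aggregate parameterized by a tilt t: positive t emphasizes large (outlier) losses, negative t suppresses them, and t → 0 recovers the ordinary average. A minimal sketch of that idea (the function name `tilted_loss` is illustrative, not from the paper):

```python
import math

def tilted_loss(losses, t):
    # t-tilted empirical risk: (1/t) * log(mean(exp(t * loss_i))).
    # t > 0 magnifies large losses (robust to class imbalance, fair across
    # samples); t < 0 downweights them (robust to outliers); as t -> 0
    # this converges to the standard average loss of ERM.
    n = len(losses)
    m = max(t * l for l in losses)  # log-sum-exp stabilization
    return (m + math.log(sum(math.exp(t * l - m) for l in losses) / n)) / t

losses = [0.1, 0.2, 5.0]  # one outlier
avg = sum(losses) / len(losses)
upweighted = tilted_loss(losses, 10.0)    # pulled toward the max loss
downweighted = tilted_loss(losses, -10.0) # pulled toward the small losses
```

Because the tilted risk interpolates smoothly between min, average, and max of the losses as t sweeps from -∞ to +∞, a single hyperparameter trades off average performance against robustness or fairness.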

February 14, 2020

RESEARCH

Fair Resource Allocation in Federated Learning

Federated learning involves training statistical models in massive, heterogeneous networks. Naively minimizing an aggregate loss function in such a network may disproportionately advantage or disadvantage some of the devices. In this work, we…

Tian Li, Maziar Sanjabi, Ahmad Beirami, Virginia Smith
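The fairness idea in this line of work is to reweight the aggregate objective so that devices with higher loss count for more, pushing the network toward a more uniform loss distribution. A minimal sketch, assuming a q-fair objective of the form sum_k p_k * F_k^(q+1) / (q+1) (the function name `q_ffl_objective` is illustrative):

```python
def q_ffl_objective(device_losses, weights, q):
    # q-fair aggregate over devices: larger q places more weight on
    # devices with higher loss F_k, trading average performance for a
    # more uniform loss distribution; q = 0 recovers the standard
    # weighted-average federated objective.
    return sum(p * (F ** (q + 1)) / (q + 1)
               for p, F in zip(weights, device_losses))

device_losses = [1.0, 3.0]   # two devices, one underserved
weights = [0.5, 0.5]
standard = q_ffl_objective(device_losses, weights, 0.0)  # plain average
fair = q_ffl_objective(device_losses, weights, 1.0)      # penalizes the straggler more
```

Raising q increases the marginal cost of leaving any single device with a high loss, which is how naively minimizing an aggregate loss stops disproportionately disadvantaging some devices.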