Mido Assran

Mido is a researcher at Facebook AI Research (FAIR) and Mila – Quebec AI Institute. He is an NSERC Vanier Scholar and holds a Vadasz Doctoral Fellowship in Engineering at McGill University. His research focuses on developing machine learning algorithms, with an emphasis on the data, time, and energy efficiency of learning. He is interested in optimization, distributed computing, and self-, semi-, and weakly supervised learning. His previous work spans both large-scale empirical analyses and theoretical studies.

Mido's Publications

July 31, 2020

RESEARCH

On the Convergence of Nesterov’s Accelerated Gradient Method in Stochastic Settings

We study Nesterov’s accelerated gradient method with constant step-size and momentum parameters in the stochastic approximation setting (unbiased gradients with bounded…

Mido Assran, Michael Rabbat
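The update being analyzed is standard Nesterov momentum driven by noisy gradients. A minimal sketch, assuming a generic unbiased gradient oracle; the function name `stochastic_nag` and its parameters are illustrative, not the paper's notation:

```python
import numpy as np

def stochastic_nag(stochastic_grad, x0, step_size=0.01, momentum=0.9, n_iters=1000):
    """Nesterov's accelerated gradient with constant step-size and constant
    momentum, driven by unbiased (noisy) gradient estimates."""
    x = np.asarray(x0, dtype=float).copy()
    v = np.zeros_like(x)                  # momentum buffer
    for _ in range(n_iters):
        lookahead = x + momentum * v      # gradient is evaluated at the lookahead point
        g = stochastic_grad(lookahead)    # unbiased estimate of the true gradient
        v = momentum * v - step_size * g
        x = x + v
    return x

# Toy usage: minimize f(x) = 0.5 * ||x||^2 from gradients corrupted by Gaussian noise.
rng = np.random.default_rng(0)
noisy_grad = lambda x: x + 0.1 * rng.standard_normal(x.shape)
x_hat = stochastic_nag(noisy_grad, np.ones(5))
```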

July 31, 2020

RESEARCH

Asynchronous Gradient-Push

We consider a multi-agent framework for distributed optimization where each agent has access to a local smooth strongly convex function, and the collective goal is to achieve consensus on the parameters that minimize the sum of the agents’ local functions. We propose an algorithm wherein each agent operates asynchronously and independently of the other agents…

Mahmoud Assran, Michael Rabbat
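The core mechanics can be illustrated with the synchronous push-sum gradient scheme that gradient-push builds on; the paper's contribution is the asynchronous variant, which this simplified sketch omits. All names and parameters here are illustrative assumptions:

```python
import numpy as np

def gradient_push(local_grads, x0, out_neighbors, step_size=0.01, n_iters=500):
    """Synchronous sketch of gradient-push over a directed graph.
    out_neighbors[j] lists the agents that j pushes to and must include j
    itself. The scalar push-sum weights y de-bias the column-stochastic
    mixing, so the ratios x_i / y_i drive every agent toward consensus on
    a minimizer of the sum of the local functions f_i."""
    n = len(local_grads)
    x = [np.asarray(x0, dtype=float).copy() for _ in range(n)]
    y = np.ones(n)
    for _ in range(n_iters):
        x_new = [np.zeros_like(x[0]) for _ in range(n)]
        y_new = np.zeros(n)
        for j in range(n):                      # each agent pushes equal shares
            d = len(out_neighbors[j])
            for i in out_neighbors[j]:
                x_new[i] += x[j] / d
                y_new[i] += y[j] / d
        for i in range(n):
            z = x_new[i] / y_new[i]             # de-biased local estimate
            x[i] = x_new[i] - step_size * local_grads[i](z)
        y = y_new
    return [x[i] / y[i] for i in range(n)]
```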

July 31, 2020

RESEARCH

Gossip-based Actor-Learner Architectures for Deep Reinforcement Learning

Multi-simulator training has contributed to the recent success of Deep Reinforcement Learning by stabilizing learning and allowing for higher training throughputs. We propose Gossip-based Actor-Learner Architectures (GALA) where several actor-learners…

Mahmoud Assran, Joshua Romoff, Nicolas Ballas, Joelle Pineau, Michael Rabbat
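The gossip component can be sketched in a few lines: instead of a global all-reduce, each learner periodically averages its parameters with a single peer on a communication graph. A toy illustration under assumed names; the full GALA architecture, with its actors and asynchronous communication, is in the paper:

```python
import numpy as np

def gossip_step(params, peers, rng, mix=0.5):
    """One gossip round: each learner i averages its parameter vector with
    that of one randomly chosen neighbor, keeping learners loosely coupled
    rather than synchronized through a global all-reduce."""
    return [mix * params[i] + (1 - mix) * params[rng.choice(peers[i])]
            for i in range(len(params))]

# Toy usage: 4 learners on a ring; repeated gossip drives parameters toward consensus.
rng = np.random.default_rng(0)
params = [rng.standard_normal(3) for _ in range(4)]
peers = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
for _ in range(50):
    params = gossip_step(params, peers, rng)
```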
