Gabriel Synnaeve

Gabriel Synnaeve is a research scientist on the Facebook AI Research (FAIR) team, which he joined as a postdoctoral researcher in 2015. Prior to Facebook, Gabriel was a postdoctoral fellow in Emmanuel Dupoux’s team at École Normale Supérieure in Paris, working on reverse-engineering the acquisition of language in babies. Gabriel received his PhD in Bayesian modeling applied to real-time strategy game AI from the University of Grenoble in 2012. Gabriel programmed a bot that placed 4th in the AAAI AIIDE 2012 StarCraft AI competition. In 2009, Gabriel worked on inductive logic programming applied to systems biology at the National Institute of Informatics in Tokyo.

Gabriel's Publications

June 19, 2020

RESEARCH

Scaling up online speech recognition using ConvNets

We design an online end-to-end speech recognition system based on Time-Depth Separable (TDS) convolutions and Connectionist Temporal Classification (CTC). The system has almost three times the throughput of a well-tuned hybrid ASR baseline while also having lower latency and a better word error rate. We improve the core TDS architecture in order to …

Vineel Pratap, Qiantong Xu, Jacob Kahn, Gilad Avidov, Tatiana Likhomanenko, Awni Hannun, Vitaliy Liptchinsky, Gabriel Synnaeve, Ronan Collobert
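The CTC criterion mentioned in the abstract scores every frame-level alignment that collapses to the target transcript; at inference time, greedy decoding inverts that collapse by merging repeated labels and dropping blanks. A minimal illustrative sketch of that collapse rule (not code from the paper; label `0` is assumed to be the blank):

```python
def ctc_greedy_decode(frame_labels, blank=0):
    """Collapse a per-frame argmax sequence the CTC way:
    merge consecutive repeats, then drop blank symbols."""
    out = []
    prev = None
    for label in frame_labels:
        if label != prev and label != blank:
            out.append(label)
        prev = label
    return out

# e.g. frames [1, 1, 0, 1, 2, 2] collapse to the label sequence [1, 1, 2]:
# the two leading 1s merge, the blank separates the repeated 1, the 2s merge.
```

The blank symbol is what lets CTC emit the same label twice in a row, which is why it must be removed only after repeats are merged.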


June 19, 2020

RESEARCH

A Structured Prediction Approach for Generalization in Cooperative Multi-Agent Reinforcement Learning

Effective coordination is crucial to solve multi-agent collaborative (MAC) problems. While centralized reinforcement learning methods can optimally solve small MAC instances, they do not scale to large problems and fail to generalize to scenarios different from those seen during training. In this paper, we consider MAC problems with some…

Nicolas Carion, Gabriel Synnaeve, Alessandro Lazaric, Nicolas Usunier


June 19, 2020

RESEARCH

Who Needs Words? Lexicon-Free Speech Recognition

Lexicon-free speech recognition naturally deals with the problem of out-of-vocabulary (OOV) words. In this paper, we show that character-based language models (LM) can perform as well as word-based LMs for speech recognition, in word error rates (WER), even without restricting the decoding to a lexicon. We study character-based LMs and show that…

Tatiana Likhomanenko, Gabriel Synnaeve, Ronan Collobert
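A character-based LM of the kind studied in this paper assigns probability to any string, so it has no out-of-vocabulary problem by construction. A toy add-one-smoothed character bigram model makes this concrete (an illustrative sketch, not the paper's model; `vocab_size` is an assumed alphabet size):

```python
import math
from collections import defaultdict

def train_char_bigram(corpus):
    """Count character bigrams over a list of training strings."""
    counts = defaultdict(lambda: defaultdict(int))
    for text in corpus:
        for a, b in zip(text, text[1:]):
            counts[a][b] += 1
    return counts

def logprob(counts, text, vocab_size=27):
    """Add-one-smoothed log-probability of a string under the bigram model.
    Unseen bigrams still get nonzero probability, so any string is scorable."""
    lp = 0.0
    for a, b in zip(text, text[1:]):
        total = sum(counts[a].values())
        lp += math.log((counts[a][b] + 1) / (total + vocab_size))
    return lp
```

Because smoothing gives every character transition nonzero mass, decoding with such an LM need not be restricted to a lexicon, which is the setting the abstract describes.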
