Ves Stoyanov

Ves is a Research Scientist at Facebook AI Research (FAIR), focusing on Natural Language Processing (NLP). Prior to FAIR, Ves worked on NLP applications for search on Facebook's search team. He spent three years as a postdoc at the Center for Language and Speech Processing at Johns Hopkins University, where he worked on machine learning for structured prediction and was supported by a Computing Innovation Fellowship from the Computing Research Association. Ves holds a Ph.D. from Cornell University, where he researched opinion analysis and was supported by an NSF Graduate Research Fellowship.

Ves's Publications

December 23, 2020

RESEARCH

NLP

XNLI: Evaluating Cross-lingual Sentence Representations

State-of-the-art natural language processing systems rely on supervision in the form of annotated data to learn competent models. These models are generally trained on data in a single language (usually English), and cannot be directly used…

Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel R. Bowman, Holger Schwenk, Ves Stoyanov

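As an illustration of the evaluation setting XNLI targets, the sketch below scores a multilingual NLI classifier on the French XNLI test split without any French training data. It assumes the Hugging Face `datasets` and `transformers` libraries; the checkpoint name is a placeholder for any multilingual encoder fine-tuned only on English NLI, and its label order is assumed to match XNLI's (entailment, neutral, contradiction).

```python
# Hedged sketch: zero-shot cross-lingual NLI evaluation in the XNLI setting.
# "your-org/multilingual-nli-english-only" is a placeholder checkpoint name.
import torch
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "your-org/multilingual-nli-english-only"  # placeholder, not a real model id
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL).eval()

# French test portion of XNLI; label ids: 0=entailment, 1=neutral, 2=contradiction.
xnli_fr = load_dataset("xnli", "fr", split="test")

correct = 0
sample = xnli_fr.select(range(200))  # small sample, for illustration only
for ex in sample:
    enc = tok(ex["premise"], ex["hypothesis"], return_tensors="pt", truncation=True)
    with torch.no_grad():
        pred = model(**enc).logits.argmax(dim=-1).item()
    correct += int(pred == ex["label"])  # assumes the model's label order matches XNLI
print(f"Zero-shot French accuracy on the sample: {correct / len(sample):.3f}")
```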

December 23, 2020

RESEARCH

NLP

BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension

We present BART, a denoising autoencoder for pretraining sequence-to-sequence models. BART is trained by (1) corrupting text with an arbitrary noising function, and…

Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, Luke Zettlemoyer

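To make the corruption step concrete, here is a simplified sketch of two of the noising transformations described for BART: text infilling and sentence permutation. It is an illustration only, not the authors' implementation; span lengths follow the Poisson(3) choice described in the paper, while the tokenization and masking budget shown are placeholder assumptions.

```python
# Simplified sketch of BART-style input corruption (not the paper's exact code).
# Text infilling replaces sampled spans with a single <mask> token; sentence
# permutation shuffles sentence order. A seq2seq model is then trained to
# reconstruct the original, uncorrupted text from this noisy input.
import numpy as np

MASK = "<mask>"

def permute_sentences(sentences, rng):
    """Sentence permutation: shuffle the order of sentences in the document."""
    sentences = list(sentences)
    rng.shuffle(sentences)
    return sentences

def text_infilling(tokens, rng, mask_ratio=0.3, poisson_lambda=3.0):
    """Text infilling: replace random spans with one <mask> token each.
    Span lengths are Poisson-distributed (lambda=3, as in the paper); the
    masking budget used here is a placeholder assumption."""
    out, i, budget = [], 0, int(len(tokens) * mask_ratio)
    while i < len(tokens):
        if budget > 0 and rng.random() < mask_ratio:
            span = max(1, int(rng.poisson(poisson_lambda)))
            out.append(MASK)
            i += span
            budget -= span
        else:
            out.append(tokens[i])
            i += 1
    return out

rng = np.random.default_rng(0)
doc = ["the cat sat on the mat .", "it was a warm afternoon ."]
noisy = [" ".join(text_infilling(s.split(), rng)) for s in permute_sentences(doc, rng)]
print(noisy)  # corrupted encoder input; the original document is the decoder target
```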

December 23, 2020

NLP

ML APPLICATIONS

Emerging Cross-lingual Structure in Pretrained Language Models

We study the problem of multilingual masked language modeling, i.e. the training of a single model on concatenated text from multiple languages, and present a detailed…

Shijie Wu, Alexis Conneau, Haoran Li, Luke Zettlemoyer, Veselin Stoyanov

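The sketch below illustrates the shared objective this paper studies: a single model is trained on text pooled from several languages and predicts randomly masked tokens with one shared vocabulary. The masking scheme shown (15% of tokens with 80/10/10 replacement) follows common BERT/XLM practice and is an assumption here, not a claim about this paper's exact configuration.

```python
# Hedged sketch of the masked-LM corruption used in multilingual pretraining.
# Token ids below are random placeholders standing in for a stream of text
# concatenated from multiple languages and encoded with one shared vocabulary.
import torch

def mask_tokens(input_ids, mask_token_id, vocab_size, mlm_prob=0.15):
    """Select ~15% of positions; replace 80% of them with the mask token,
    10% with a random token, and leave 10% unchanged. Unselected positions
    get label -100 so they are ignored by the cross-entropy loss."""
    labels = input_ids.clone()
    selected = torch.bernoulli(torch.full(labels.shape, mlm_prob)).bool()
    labels[~selected] = -100

    to_mask = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & selected
    input_ids[to_mask] = mask_token_id

    to_randomize = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & selected & ~to_mask
    input_ids[to_randomize] = torch.randint(vocab_size, labels.shape)[to_randomize]
    return input_ids, labels

batch = torch.randint(5, 1000, (2, 16))  # placeholder ids from the pooled corpus
inputs, labels = mask_tokens(batch.clone(), mask_token_id=4, vocab_size=1000)
print(inputs)
print(labels)
```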