June 21, 2020
Pretrained language models are now ubiquitous in Natural Language Processing. Despite their success, most available models have either been trained on English data or on the concatenation of data in multiple languages, which makes practical use of such models, in all languages except English, very limited. In this paper, we investigate the feasibility of training monolingual Transformer-based language models for other languages, taking French as an example and evaluating our language models on part-of-speech tagging, dependency parsing, named entity recognition and natural language inference tasks. We show that the use of web-crawled data is preferable to the use of Wikipedia data. More surprisingly, we show that a relatively small web-crawled dataset (4 GB) leads to results that are as good as those obtained using larger datasets (130+ GB). Our best-performing model, CamemBERT, reaches or improves the state of the art in all four downstream tasks.
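As a concrete usage illustration (not part of the paper itself), the released CamemBERT checkpoint can be loaded through the Hugging Face transformers library. The sketch below assumes the publicly distributed "camembert-base" checkpoint and runs a simple fill-mask prediction on a French sentence.

```python
# Minimal sketch: loading CamemBERT via Hugging Face transformers.
# Assumes the "camembert-base" checkpoint is available from the Hub.
import torch
from transformers import CamembertTokenizer, CamembertForMaskedLM

tokenizer = CamembertTokenizer.from_pretrained("camembert-base")
model = CamembertForMaskedLM.from_pretrained("camembert-base")
model.eval()

# Fill in the masked token in a French sentence.
sentence = "Le camembert est <mask> !"
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Find the mask position and take the highest-scoring vocabulary item.
mask_index = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```

Because CamemBERT uses a SentencePiece vocabulary, the tokenizer handles subword segmentation transparently; the same checkpoint can then be fine-tuned for the tagging, parsing, NER and NLI tasks evaluated in the paper.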
Written by
Louis Martin
Benjamin Muller
Pedro Javier Ortiz Suárez
Yoann Dupont
Laurent Romary
Éric Villemonte de la Clergerie
Djamé Seddah
Benoît Sagot
Publisher
Association for Computational Linguistics (ACL)
Research Topics
Natural Language Processing