Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks

December 16, 2020

Abstract

Large pre-trained language models have been shown to store factual knowledge in their parameters, and achieve state-of-the-art results when fine-tuned on downstream NLP tasks. However, their ability to access and precisely manipulate knowledge is still limited, and hence on knowledge-intensive tasks, their performance lags behind task-specific architectures. Additionally, providing provenance for their decisions and updating their world knowledge remain open research problems. Pre-trained models with a differentiable access mechanism to explicit non-parametric memory can overcome this issue, but have so far been only investigated for extractive downstream tasks. We explore a general-purpose fine-tuning recipe for retrieval-augmented generation (RAG) -- models which combine pre-trained parametric and non-parametric memory for language generation. We introduce RAG models where the parametric memory is a pre-trained seq2seq model and the non-parametric memory is a dense vector index of Wikipedia, accessed with a pre-trained neural retriever. We compare two RAG formulations, one which conditions on the same retrieved passages across the whole generated sequence, the other can use different passages per token. We fine-tune and evaluate our models on a wide range of knowledge-intensive NLP tasks and set the state-of-the-art on three open domain QA tasks, outperforming parametric seq2seq models and task-specific retrieve-and-extract architectures. For language generation tasks, we find that RAG models generate more specific, diverse and factual language than a state-of-the-art parametric-only seq2seq baseline.
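The two formulations mentioned in the abstract, RAG-Sequence and RAG-Token, differ only in how the retrieved passages are marginalised out of the generation probability. The sketch below illustrates that difference with toy NumPy arrays standing in for the paper's pre-trained neural retriever and seq2seq generator; every name, shape, and distribution here is an illustrative assumption, not the released implementation.

```python
# Minimal, self-contained sketch (NumPy only) of the two RAG marginalizations.
# Toy random embeddings and token distributions stand in for the paper's
# pre-trained retriever over a dense Wikipedia index and its pre-trained
# seq2seq generator; all names and shapes below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# --- Toy dense retriever: score documents by inner product with the query ---
d_model, n_docs, top_k = 8, 5, 3
query_emb = rng.normal(size=d_model)            # query embedding q(x)
doc_embs = rng.normal(size=(n_docs, d_model))   # document embeddings d(z)

scores = doc_embs @ query_emb                   # exact inner-product search (toy-sized)
top_idx = np.argsort(scores)[::-1][:top_k]      # top-k retrieved passages
p_z = np.exp(scores[top_idx]) / np.exp(scores[top_idx]).sum()   # p_eta(z | x)

# --- Toy generator: per-document, per-step token distributions ---
vocab, target_len = 10, 4
target = rng.integers(vocab, size=target_len)   # a fixed target sequence y
logits = rng.normal(size=(top_k, target_len, vocab))
p_tok = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)  # p_theta(y_i | x, z, y_<i)

# Likelihood of each target token under each retrieved passage: shape (top_k, target_len)
tok_lik_per_doc = p_tok[np.arange(top_k)[:, None],
                        np.arange(target_len)[None, :],
                        target]

# RAG-Sequence: the same retrieved passages condition the whole sequence:
#   p(y | x) = sum_z p(z | x) * prod_i p(y_i | x, z, y_<i)
p_rag_sequence = (p_z * tok_lik_per_doc.prod(axis=1)).sum()

# RAG-Token: passages are marginalised independently at every token:
#   p(y | x) = prod_i sum_z p(z | x) * p(y_i | x, z, y_<i)
p_rag_token = (p_z[:, None] * tok_lik_per_doc).sum(axis=0).prod()

print(f"RAG-Sequence p(y|x) = {p_rag_sequence:.3e}")
print(f"RAG-Token    p(y|x) = {p_rag_token:.3e}")
```

In this toy setup, RAG-Sequence scores the whole output against each retrieved passage and then mixes the full-sequence likelihoods, so the same passages condition the entire generation, whereas RAG-Token re-mixes passages at every step, which is what allows a different passage to inform each generated token.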


AUTHORS

Patrick Lewis

Ethan Perez

Aleksandra Piktus

Fabio Petroni

Vladimir Karpukhin

Naman Goyal

Heinrich Küttler

Mike Lewis

Wen-tau Yih

Tim Rocktäschel

Sebastian Riedel

Douwe Kiela

Publisher

NeurIPS 2020

