Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks

December 16, 2020

Abstract

Large pre-trained language models have been shown to store factual knowledge in their parameters, and achieve state-of-the-art results when fine-tuned on downstream NLP tasks. However, their ability to access and precisely manipulate knowledge is still limited, and hence on knowledge-intensive tasks, their performance lags behind task-specific architectures. Additionally, providing provenance for their decisions and updating their world knowledge remain open research problems. Pre-trained models with a differentiable access mechanism to explicit non-parametric memory can overcome this issue, but have so far been only investigated for extractive downstream tasks. We explore a general-purpose fine-tuning recipe for retrieval-augmented generation (RAG) -- models which combine pre-trained parametric and non-parametric memory for language generation. We introduce RAG models where the parametric memory is a pre-trained seq2seq model and the non-parametric memory is a dense vector index of Wikipedia, accessed with a pre-trained neural retriever. We compare two RAG formulations, one which conditions on the same retrieved passages across the whole generated sequence, and another which can use different passages per token. We fine-tune and evaluate our models on a wide range of knowledge-intensive NLP tasks and set the state-of-the-art on three open domain QA tasks, outperforming parametric seq2seq models and task-specific retrieve-and-extract architectures. For language generation tasks, we find that RAG models generate more specific, diverse and factual language than a state-of-the-art parametric-only seq2seq baseline.
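The two formulations differ in how the retrieved passage is marginalized out. A brief sketch in the paper's notation (x is the input, y = y_1 ... y_N the generated sequence, z a retrieved passage, p_η the dense retriever and p_θ the seq2seq generator; restricting the sum to the top-K retrieved passages is an approximation made for tractability):

\[
p_{\text{RAG-Sequence}}(y \mid x) \;\approx\; \sum_{z \in \text{top-}K(p_\eta(\cdot \mid x))} p_\eta(z \mid x) \prod_{i=1}^{N} p_\theta(y_i \mid x, z, y_{1:i-1})
\]
\[
p_{\text{RAG-Token}}(y \mid x) \;\approx\; \prod_{i=1}^{N} \sum_{z \in \text{top-}K(p_\eta(\cdot \mid x))} p_\eta(z \mid x)\, p_\theta(y_i \mid x, z, y_{1:i-1})
\]

In the first form a single retrieved passage is held fixed for the whole output; in the second, each generated token can draw on a different passage, at the cost of marginalizing over the retrieved set at every decoding step.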

Download the Paper

AUTHORS

Written by

Patrick Lewis

Ethan Perez

Aleksandra Piktus

Fabio Petroni

Vladimir Karpukhin

Naman Goyal

Heinrich Küttler

Mike Lewis

Wen-tau Yih

Tim Rocktäschel

Sebastian Riedel

Douwe Kiela

Publisher

NeurIPS 2020

Related Publications

August 01, 2019

NLP

Simple and Effective Curriculum Pointer-Generator Networks for Reading Comprehension over Long Narratives

This paper tackles the problem of reading comprehension over long narratives where documents easily span over thousands of tokens. We propose a curriculum learning (CL) based Pointer-Generator framework for reading/sampling over large…

Yi Tay, Shuohang Wang, Luu Anh Tuan, Jie Fu, Minh C. Phan, Xingdi Yuan, Jinfeng Rao, Siu Cheung Hui, Aston Zhang

July 29, 2019

NLP

Improved Zero-shot Neural Machine Translation via Ignoring Spurious Correlations

Zero-shot translation, translating between language pairs on which a Neural Machine Translation (NMT) system has never been trained, is an emergent property when training the system in multilingual settings. However, naïve training for…

Jiatao Gu, Yong Wang, Kyunghyun Cho, Victor O.K. Li

July 29, 2019

NLP

Word-order biases in deep-agent emergent communication

Sequence-processing neural networks led to remarkable progress on many NLP tasks. As a consequence, there has been increasing interest in understanding to what extent they process language as humans do. We aim here to uncover which biases such…

Rahma Chaabouni, Eugene Kharitonov, Alessandro Lazaric, Emmanuel Dupoux, Marco Baroni

June 11, 2019

NLP

COMPUTER VISION

Adversarial Inference for Multi-Sentence Video Description

While significant progress has been made in the image captioning task, video description is still in its infancy due to the complex nature of video data. Generating multi-sentence descriptions for long videos is even more challenging. Among the…

Jae Sung Park, Marcus Rohrbach, Trevor Darrell, Anna Rohrbach
