BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension

July 8, 2020


We present BART, a denoising autoencoder for pretraining sequence-to-sequence models. BART is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. It uses a standard Transformer-based neural machine translation architecture which, despite its simplicity, can be seen as generalizing BERT (due to the bidirectional encoder), GPT (with the left-to-right decoder), and other recent pretraining schemes. We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of sentences and using a novel in-filling scheme, where spans of text are replaced with a single mask token. BART is particularly effective when fine-tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa on GLUE and SQuAD, and achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 3.5 ROUGE. BART also provides a 1.1 BLEU increase over a back-translation system for machine translation, with only target language pretraining. We also replicate other pretraining schemes within the BART framework, to understand their effect on end-task performance.
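
Since the abstract spells out the two best-performing noising functions, here is a minimal sketch of both in Python, assuming whitespace tokenization, a literal "<mask>" string, and a span-start probability chosen so that roughly 30% of tokens are masked in expectation; these choices are illustrative, and the actual model operates on subword tokens rather than words.

    import numpy as np

    MASK = "<mask>"
    rng = np.random.default_rng(0)

    def permute_sentences(text):
        """Sentence permutation: randomly shuffle the sentence order."""
        sents = [s.strip() for s in text.split(".") if s.strip()]
        rng.shuffle(sents)
        return ". ".join(sents) + "."

    def infill(tokens, mask_ratio=0.3, lam=3.0):
        """Text infilling: replace each sampled span with a single MASK.

        Span lengths are drawn from Poisson(lam); a 0-length span just
        inserts a mask. Starting a span at each position with probability
        mask_ratio / lam masks about mask_ratio of tokens in expectation.
        """
        out, i = [], 0
        while i < len(tokens):
            if rng.random() < mask_ratio / lam:
                span = int(rng.poisson(lam))
                out.append(MASK)  # one mask token, regardless of span length
                i += span         # span == 0: mask inserted, nothing removed
            else:
                out.append(tokens[i])
                i += 1
        return out

    if __name__ == "__main__":
        doc = "BART corrupts text with noise. A model learns to reconstruct it. The loss is cross-entropy."
        print(permute_sentences(doc))
        print(" ".join(infill(doc.split())))

Replacing an entire span with one mask token, rather than one mask per token, means the model must also predict how many tokens are missing, a property the paper highlights for the in-filling scheme.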



Written by

Mike Lewis

Yinhan Liu

Naman Goyal

Marjan Ghazvininejad

Abdelrahman Mohamed

Omer Levy

Ves Stoyanov

Luke Zettlemoyer


Association for Computational Linguistics (ACL)

Related Publications

August 01, 2019


Simple and Effective Curriculum Pointer-Generator Networks for Reading Comprehension over Long Narratives

This paper tackles the problem of reading comprehension over long narratives where documents easily span over thousands of tokens. We propose a curriculum learning (CL) based Pointer-Generator framework for reading/sampling over large…

Yi Tay, Shuohang Wang, Luu Anh Tuan, Jie Fu, Minh C. Phan, Xingdi Yuan, Jinfeng Rao, Siu Cheung Hui, Aston Zhang


July 29, 2019


Improved Zero-shot Neural Machine Translation via Ignoring Spurious Correlations

Zero-shot translation, translating between language pairs on which a Neural Machine Translation (NMT) system has never been trained, is an emergent property when training the system in multilingual settings. However, naïve training for…

Jiatao Gu, Yong Wang, Kyunghyun Cho, Victor O.K. Li


July 29, 2019


Word-order biases in deep-agent emergent communication

Sequence-processing neural networks led to remarkable progress on many NLP tasks. As a consequence, there has been increasing interest in understanding to what extent they process language as humans do. We aim here to uncover which biases such…

Rahma Chaabouni, Eugene Kharitonov, Alessandro Lazaric, Emmanuel Dupoux, Marco Baroni


June 11, 2019



Adversarial Inference for Multi-Sentence Video Description

While significant progress has been made in the image captioning task, video description is still in its infancy due to the complex nature of video data. Generating multi-sentence descriptions for long videos is even more challenging. Among the…

Jae Sung Park, Marcus Rohrbach, Trevor Darrell, Anna Rohrbach

