Domain-matched Pre-training Tasks for Dense Retrieval

May 06, 2022

Abstract

Pre-training on larger datasets with ever-increasing model size is now a proven recipe for increased performance across almost all NLP tasks. A notable exception is information retrieval, where additional pre-training has so far failed to produce convincing results. We show that, with the right pre-training setup, this barrier can be overcome. We demonstrate this by pre-training large bi-encoder models on 1) a recently released set of 65 million synthetically generated questions, and 2) 200 million post-comment pairs from a pre-existing dataset of Reddit conversations made available by pushshift.io. We evaluate on a set of information retrieval and dialogue retrieval benchmarks, showing substantial improvements over supervised baselines.
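
The abstract refers to bi-encoder models for dense retrieval. As a rough illustration only, the sketch below shows how a bi-encoder scores passages against a query with dot-product similarity over independently computed embeddings. The checkpoint name (bert-base-uncased), the shared encoder, the [CLS] pooling, and the toy passages are assumptions made for this example; they are not the paper's configuration.

```python
# Minimal, illustrative bi-encoder retrieval sketch (not the paper's exact setup).
# The model name, pooling choice, and example texts are assumptions for demonstration.
import torch
from transformers import AutoModel, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"

# A single shared encoder keeps the example small; dense retrievers often use
# separate question and passage encoders instead.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased").to(device).eval()

def embed(texts):
    """Encode a list of strings into fixed-size vectors via [CLS] pooling."""
    batch = tokenizer(texts, padding=True, truncation=True, max_length=256,
                      return_tensors="pt").to(device)
    with torch.no_grad():
        out = encoder(**batch)
    return out.last_hidden_state[:, 0]  # take the [CLS] token representation

question = "who maintains the pushshift.io Reddit dataset?"
passages = [
    "Pushshift is a social media data collection and archiving platform.",
    "Dense retrieval ranks passages by similarity of learned embeddings.",
    "Bi-encoders embed queries and passages independently for fast search.",
]

q_vec = embed([question])      # shape: (1, hidden)
p_vecs = embed(passages)       # shape: (num_passages, hidden)
scores = q_vec @ p_vecs.T      # dot-product similarity between query and passages
ranking = scores.squeeze(0).argsort(descending=True).tolist()
for rank, idx in enumerate(ranking, start=1):
    print(f"{rank}. score={scores[0, idx].item():.2f}  {passages[idx]}")
```

Because the two sides are encoded independently, passage embeddings can be pre-computed and indexed, which is what makes bi-encoder retrieval fast at query time.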

AUTHORS

Barlas Oguz

Aleksandra Piktus

Anchit Gupta

Kushal Lakhotia

Patrick Lewis

Scott Yih

Sebastian Riedel

Sonal Gupta

Vladimir Karpukhin

Xilun Chen

Yashar Mehdad

Publisher

ACL Rolling Review
