RESEARCH

SPEECH & AUDIO

Cloze-driven Pretraining of Self-attention Networks

October 23, 2019

Abstract

We present a new approach for pretraining a bi-directional transformer model that provides significant performance gains across a variety of language understanding problems. Our model solves a cloze-style word reconstruction task, where each word is ablated and must be predicted given the rest of the text. Experiments demonstrate large performance gains on GLUE and new state-of-the-art results on NER as well as constituency parsing benchmarks, consistent with the concurrently introduced BERT model. We also present a detailed analysis of a number of factors that contribute to effective pretraining, including data domain and size, model capacity, and variations on the cloze objective.
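
The cloze objective described above can be pictured as follows: every word position in a sentence is ablated in turn, and the missing word becomes the prediction target given the surrounding bidirectional context. The short Python sketch below illustrates one way such training examples might be constructed; the make_cloze_examples helper and the <ablated> placeholder are illustrative assumptions rather than the paper's actual preprocessing code.

```python
# Minimal sketch of cloze-style example construction (illustrative only).
# Each position in a token sequence is ablated in turn, and the removed
# word is kept as the prediction target for the remaining bidirectional
# context. The <ablated> placeholder and make_cloze_examples are
# assumptions made for this sketch, not code from the paper.

from typing import List, Tuple

ABLATED = "<ablated>"  # hypothetical placeholder for the removed word


def make_cloze_examples(tokens: List[str]) -> List[Tuple[List[str], str]]:
    """For each position, ablate the word and keep it as the target."""
    examples = []
    for i, target in enumerate(tokens):
        context = tokens[:i] + [ABLATED] + tokens[i + 1:]
        examples.append((context, target))
    return examples


if __name__ == "__main__":
    sentence = "the cat sat on the mat".split()
    for context, target in make_cloze_examples(sentence):
        print(" ".join(context), "->", target)
```

At training time, a bi-directional transformer would be trained to recover each target word from its context; the sketch above only covers how the (context, target) pairs for that objective could be laid out.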

Related Publications

December 15, 2021

RESEARCH

Sample-and-threshold differential privacy: Histograms and applications

Akash Bharadwaj, Graham Cormode

August 30, 2021

SPEECH & AUDIO

NLP

A Two-stage Approach to Speech Bandwidth Extension

Yun Wang, Christian Fuegen, Didi Zhang, Gil Keren, Kaustubh Kalgaonkar, Ju Lin

January 09, 2021

RESEARCH

COMPUTER VISION

Tarsier: Evolving Noise Injection in Super-Resolution GANs

Baptiste Rozière, Camille Couprie, Olivier Teytaud, Andry Rasoanaivo, Hanhe Lin, Nathanaël Carraz Rakotonirina, Vlad Hosu

January 09, 2021

RESEARCH

Improved Sample Complexity for Incremental Autonomous Exploration in MDPs

Jean Tarbouriech, Alessandro Lazaric, Matteo Pirotta, Michal Valko
