April 27, 2020
The use of deep pre-trained transformers has led to remarkable progress in a number of applications (Devlin et al., 2019). For tasks that make pairwise comparisons between sequences, matching a given input with a corresponding label, two approaches are common: Cross-encoders, which perform full self-attention over the pair, and Bi-encoders, which encode the pair separately. The former often performs better, but is too slow for practical use. In this work, we develop a new transformer architecture, the Poly-encoder, that learns global rather than token-level self-attention features. We perform a detailed comparison of all three approaches, including which pre-training and fine-tuning strategies work best. We show that our models achieve state-of-the-art results on four tasks; that Poly-encoders are faster than Cross-encoders and more accurate than Bi-encoders; and that the best results are obtained by pre-training on large datasets similar to the downstream tasks.
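The sketch below (not the authors' released code) illustrates how the three scoring strategies described in the abstract differ, using a toy transformer encoder in PyTorch; the module, the dimensions, the number of codes m, and the dot-product scoring head are illustrative assumptions, not the paper's exact configuration.

# Minimal sketch, assuming PyTorch; ToyEncoder stands in for a pre-trained transformer.
import torch
import torch.nn as nn

class ToyEncoder(nn.Module):
    def __init__(self, vocab=1000, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.enc = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, tokens):                     # (batch, seq) -> (batch, seq, dim)
        return self.enc(self.emb(tokens))

def bi_encoder_score(enc, ctx, cand):
    # Encode context and candidate separately; score with a dot product.
    # Candidate vectors can be pre-computed and cached, hence the speed.
    c = enc(ctx)[:, 0]
    y = enc(cand)[:, 0]
    return (c * y).sum(-1)

def cross_encoder_score(enc, ctx, cand, w):
    # Concatenate the pair and run full self-attention over both sequences.
    # Accurate, but every candidate requires a fresh forward pass.
    joint = torch.cat([ctx, cand], dim=1)
    return enc(joint)[:, 0] @ w

def poly_encoder_score(enc, ctx, cand, codes):
    # m learned "codes" attend over the context tokens to produce m global
    # context features; the cached candidate vector then attends over those.
    ctx_out = enc(ctx)                             # (batch, seq, dim)
    y = enc(cand)[:, 0]                            # (batch, dim)
    att = torch.softmax(codes @ ctx_out.transpose(1, 2), dim=-1)          # (batch, m, seq)
    global_ctx = att @ ctx_out                     # (batch, m, dim)
    mix = torch.softmax((global_ctx @ y.unsqueeze(-1)).squeeze(-1), -1)   # (batch, m)
    final_ctx = (mix.unsqueeze(-1) * global_ctx).sum(1)                   # (batch, dim)
    return (final_ctx * y).sum(-1)

enc, dim, m = ToyEncoder(), 64, 16
ctx = torch.randint(0, 1000, (2, 20))
cand = torch.randint(0, 1000, (2, 10))
codes = torch.randn(m, dim)                        # learned parameters in practice
w = torch.randn(dim)                               # learned scoring head in practice
print(bi_encoder_score(enc, ctx, cand),
      cross_encoder_score(enc, ctx, cand, w),
      poly_encoder_score(enc, ctx, cand, codes))

The design point this sketch tries to capture is that the Poly-encoder keeps the Bi-encoder's ability to pre-compute and cache candidate representations, while the learned codes recover some of the context-candidate interaction that makes the Cross-encoder accurate.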
August 01, 2019
Yi Tay, Shuohang Wang, Luu Anh Tuan, Jie Fu, Minh C. Phan, Xingdi Yuan, Jinfeng Rao, Siu Cheung Hui, Aston Zhang
July 29, 2019
Jiatao Gu, Yong Wang, Kyunghyun Cho, Victor O.K. Li
June 11, 2019
Jae Sung Park, Marcus Rohrbach, Trevor Darrell, Anna Rohrbach
June 10, 2019
Tianxiao Shen, Myle Ott, Michael Auli, Marc'Aurelio Ranzato