May 06, 2020
Recent years have witnessed the burgeoning of pretrained language models (LMs) for text-based natural language (NL) understanding tasks. Such models are typically trained on free-form NL text and hence may not be suitable for tasks like semantic parsing over structured data, which require reasoning over both free-form NL questions and structured tabular data (e.g., database tables). In this paper we present TaBERT, a pretrained LM that jointly learns representations for NL sentences and (semi-)structured tables. TaBERT is trained on a large corpus of 26 million tables and their English contexts. In experiments, neural semantic parsers using TaBERT as feature representation layers achieve new best results on the challenging weakly-supervised semantic parsing benchmark WikiTableQuestions, while performing competitively on the text-to-SQL dataset Spider.
Publisher: ACL
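A minimal sketch of how a downstream parser might obtain TaBERT's joint encodings of an utterance and a table. The `TableBertModel`, `Table`, and `Column` interface below follows the public facebookresearch/TaBERT reference implementation; the checkpoint path and example data are placeholders, and the exact signatures should be treated as assumptions.

```python
from table_bert import TableBertModel, Table, Column

# Load a pretrained TaBERT checkpoint (path is a placeholder).
model = TableBertModel.from_pretrained('tabert_base_k3/model.bin')

# A (semi-)structured table: each column carries a name, a type,
# and a sample value used when linearizing the table for the encoder.
table = Table(
    id='List of countries by GDP (PPP)',
    header=[
        Column('Nation', 'text', sample_value='United States'),
        Column('Gross Domestic Product', 'real', sample_value='21,439,453'),
    ],
    data=[
        ['United States', '21,439,453'],
        ['China', '27,308,857'],
        ['European Union', '22,774,165'],
    ],
).tokenize(model.tokenizer)

# The NL utterance paired with the table.
context = 'show me countries ranked by GDP'

# encode() takes batched inputs and returns contextual encodings for
# the utterance tokens and for each table column.
context_encoding, column_encoding, info_dict = model.encode(
    contexts=[model.tokenizer.tokenize(context)],
    tables=[table],
)
```

The returned `context_encoding` and `column_encoding` are the feature representations a downstream semantic parser would consume in place of its original word embeddings.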