NLP

Baked-in State Probing

December 15, 2022

Abstract

Neural language models have been analyzed for their linguistic and extra-linguistic knowledge via probing. Of particular interest has been the question: how much can a language model trained only on form learn about meaning? Recent work has demonstrated via probing classifiers that, in the setting of simple procedural text where by “meaning” we mean the underlying world state, language models achieve non-trivial performance on world state tracking. However, our proposed evaluation based on model predictions shows differing results, suggesting that these models are either not capturing the world state or not using it. How do these results change if the model has access to the world state? We explore this alternate setting, in which the underlying world state is available only during training, and investigate ways of “baking in” state knowledge alongside the primary task of language modeling. Our proposed approaches allow for state probing during inference simply via text prompts, avoiding any probing classifier machinery. We show that baking in state knowledge during training leads to significant improvements in both state tracking performance and text generation quality.
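
For intuition, below is a minimal, hypothetical sketch of what prompt-based state probing could look like for procedural text: state annotations are written as plain-text question/answer pairs appended to the training text, so that at inference the same question serves as a prompt and no separate probing classifier is needed. The format, function names, and entities are illustrative assumptions, not the paper's exact scheme.

    # Illustrative sketch (not the paper's exact format): "bake in" world state by
    # interleaving textual state queries and answers with the procedural text during
    # training, then read the state back out at inference with the same text prompt.

    def build_training_text(steps, state):
        """Concatenate procedural steps with plain-text state annotations."""
        text = " ".join(steps)
        # One query/answer pair per tracked entity (hypothetical query format).
        for entity, location in state.items():
            text += f" Q: where is the {entity}? A: {location}."
        return text

    def build_probe_prompt(steps, entity):
        """At inference, reuse the same query format as a text prompt."""
        return " ".join(steps) + f" Q: where is the {entity}? A:"

    if __name__ == "__main__":
        steps = ["Pick up the apple.", "Put the apple in the basket."]
        state = {"apple": "in the basket"}
        print(build_training_text(steps, state))   # training example with baked-in state
        print(build_probe_prompt(steps, "apple"))  # inference-time state probe

A usage note: because the probe is just another text prompt, state tracking accuracy can be measured directly from the model's generated continuation, with no auxiliary classifier trained on hidden representations.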

AUTHORS

Shubham Toshniwal

Karen Livescu

Kevin Gimpel

Sam Wiseman

Publisher

EMNLP

Related Publications

February 24, 2023

NLP

LLaMA: Open and Efficient Foundation Language Models

Faisal Azhar, Hugo Touvron, Armand Joulin, Aurelien Rodriguez, Baptiste Rozière, Eric Hambro, Gautier Izacard, Guillaume Lample, Marie-Anne Lachaux, Naman Goyal, Thibaut Lavril, Timothee Lacroix, Xavier Martinet, Edouard Grave

February 20, 2023

INTEGRITY

NLP

UNIREX: A Unified Learning Framework for Language Model Rationale Extraction

Maziar Sanjabi, Aaron Chan, Hamed Firooz, Lambert Mathias, Liang Tan, Shaoliang Nie, Xiaochang Peng, Xiang Ren

December 31, 2022

NLP

Textless Speech Emotion Conversion using Discrete & Decomposed Representations

Yossef Mordechay Adi, Abdelrahman Mohamed, Adam Polyak, Emmanuel Dupoux, Evgeny Kharitonov, Jade Copet, Morgane Rivière, Tu Anh Nguyen, Wei-Ning Hsu, Felix Kreuk

December 29, 2022

NLP

Staircase Attention for Recurrent Processing of Sequences

Dexter Ju, Jason Weston, Sainbayar Sukhbaatar, Stephen Roller
