HUMAN & MACHINE INTELLIGENCE

RESEARCH

Measuring Systematic Generalization in Neural Symbolic Reasoning with Transformers

November 30, 2020

Abstract

We are interested in understanding how well Transformer language models (TLMs) can perform reasoning tasks when trained on knowledge encoded in natural language. We investigate their systematic generalization abilities on a logical reasoning task in natural language, which involves reasoning over relationships between entities grounded in first-order logical proofs. Specifically, we perform soft theorem-proving by leveraging TLMs to generate natural language proofs. We test the generated proofs for logical consistency, along with the accuracy of the final inference. We observe length-generalization issues: models trained on shorter proofs struggle when evaluated on longer-than-trained sequences. However, TLMs improve their generalization performance after being exposed to longer, exhaustive proofs. In addition, we discover that TLMs generalize better using backward-chaining proofs than their forward-chaining counterparts, while finding it easier to generate forward-chaining proofs. We also observe that models that are not trained to generate proofs are better at generalizing to problems based on longer proofs, which suggests that Transformers have efficient internal reasoning strategies that are harder to interpret. These results highlight the systematic generalization behavior of TLMs in the context of logical reasoning, and we believe this work motivates deeper inspection of their underlying reasoning strategies.
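To make the distinction between the two proof strategies concrete, the sketch below contrasts forward and backward chaining on a toy rule base, together with a consistency check of the resulting proof, mirroring the kind of test the abstract describes. It is a minimal illustration in Python, not the paper's implementation: the facts, rules, and entity names are invented here, and the paper's proofs are generated by TLMs in natural language rather than by a symbolic prover.

```python
# Minimal sketch of forward- vs. backward-chaining proofs over a toy
# rule base. NOT the paper's implementation: all facts, rules, and
# entity names below are invented purely for illustration.

# Facts are (subject, relation, object) triples.
FACTS = {
    ("Alice", "mother", "Bob"),
    ("Bob", "father", "Carol"),
}

# Composition rules: (r1, r2) -> r3 means r1(x, y) and r2(y, z) => r3(x, z).
RULES = {("mother", "father"): "grandmother"}


def forward_chain(facts, rules):
    """Forward chaining: derive new facts from known ones to a fixed point."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (x, r1, y) in list(derived):
            for (a, r2, z) in list(derived):
                if y == a and (r1, r2) in rules:
                    new_fact = (x, rules[(r1, r2)], z)
                    if new_fact not in derived:
                        derived.add(new_fact)
                        changed = True
    return derived


def backward_chain(goal, facts, rules, depth=3):
    """Backward chaining: decompose the goal into subgoals until facts
    are reached. Returns the proof as a list of triples, or None."""
    if goal in facts:
        return [goal]  # proof leaf: the goal is a known fact
    if depth == 0:
        return None
    x, r3, z = goal
    entities = {e for (s, _, o) in facts for e in (s, o)}
    for (r1, r2), r in rules.items():
        if r != r3:
            continue
        # Try every intermediate entity y with r1(x, y) and r2(y, z).
        for y in entities:
            left = backward_chain((x, r1, y), facts, rules, depth - 1)
            right = backward_chain((y, r2, z), facts, rules, depth - 1)
            if left is not None and right is not None:
                return left + right + [goal]
    return None


def check_proof(proof, facts, rules):
    """Check a proof for logical consistency: every step must be a known
    fact or follow from two earlier steps via a composition rule."""
    seen = set(facts)
    for step in proof:
        if step in seen:
            continue
        x, r3, z = step
        ok = any(
            (x, r1, y) in seen and (y, r2, z) in seen
            for (r1, r2), r in rules.items() if r == r3
            for y in {e for (s, _, o) in seen for e in (s, o)}
        )
        if not ok:
            return False
        seen.add(step)
    return True


print(forward_chain(FACTS, RULES))
# -> includes ('Alice', 'grandmother', 'Carol') among the derived facts

proof = backward_chain(("Alice", "grandmother", "Carol"), FACTS, RULES)
print(proof)
# -> [('Alice', 'mother', 'Bob'), ('Bob', 'father', 'Carol'),
#     ('Alice', 'grandmother', 'Carol')]
print(check_proof(proof, FACTS, RULES))  # -> True
```

Forward chaining eagerly derives every consequence of the facts, while backward chaining works from the goal down to known facts; the depth bound plays the same role as the proof lengths whose effect on generalization the abstract discusses.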

AUTHORS

Koustuv Sinha

Christopher Pal

Nicolas Gontier

Siva Reddy

Publisher

NeurIPS

Related Publications

October 04, 2023

HUMAN & MACHINE INTELLIGENCE

SPEECH & AUDIO

Decoding speech perception from non-invasive brain recordings

Alexandre Défossez, Charlotte Caucheteux, Jérémy Rapin, Ori Kabeli, Jean-Rémi King

July 10, 2023

HUMAN & MACHINE INTELLIGENCE

NLP

Dynatask: A Framework for Creating Dynamic AI Benchmark Tasks

Tristan Thrush, Kushal Tirumala, Anmol Gupta, Max Bartolo, Pedro Rodriguez, Tariq Kane, William Gaviria Rojas, Peter Mattson, Adina Williams, Douwe Kiela

July 06, 2023

HUMAN & MACHINE INTELLIGENCE

REINFORCEMENT LEARNING

Augmented Language Models: a Survey

Gregoire Mialon, Roberto Dessì, Maria Lomeli, Christoforos Nalmpantis, Ram Pasunuru, Roberta Raileanu, Baptiste Rozière, Timo Schick, Jane Yu, Asli Celikyilmaz, Edouard Grave, Yann LeCun, Thomas Scialom

June 21, 2023

HUMAN & MACHINE INTELLIGENCE

NLP

Benchmarking Compositionality with Formal Languages

Josef Valvoda, Naomi Saphra, Jonathan Rawski, Adina Williams, Ryan Cotterell
