
Combining label propagation and simple models outperforms graph neural networks

May 4, 2021

Abstract

Graph Neural Networks (GNNs) are the predominant technique for learning over graphs. However, there is relatively little understanding of why GNNs are successful in practice and whether they are necessary for good performance. Here, we show that for many standard transductive node classification benchmarks, we can exceed or match the performance of state-of-the-art GNNs by combining shallow models that ignore the graph structure with two simple post-processing steps that exploit correlation in the label structure: (i) an “error correlation” that spreads residual errors in training data to correct errors in test data and (ii) a “prediction correlation” that smooths the predictions on the test data. We call this overall procedure Correct and Smooth (C&S); the post-processing steps are implemented via simple modifications to standard label propagation techniques from early graph-based semi-supervised learning methods. Our approach exceeds or nearly matches the performance of state-of-the-art GNNs on a wide variety of benchmarks, with just a small fraction of the parameters and orders of magnitude faster runtime. For instance, we exceed the best known GNN performance on the OGB-Products dataset with 137 times fewer parameters and greater than 100 times less training time. The performance of our methods highlights how directly incorporating label information into the learning algorithm (as was done in traditional techniques) yields easy and substantial performance gains. We can also incorporate our techniques into large GNN models, providing modest gains.
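
To make the two post-processing steps concrete, below is a minimal sketch of C&S in Python. It assumes base predictions Z from any graph-agnostic model (e.g., a softmax over the outputs of a shallow MLP) and the symmetric-normalized adjacency matrix S = D^(-1/2) A D^(-1/2); both steps iterate the classic label propagation update X ← (1 − α) X₀ + α S X. The function names and hyperparameter defaults here are illustrative, not the paper's exact configuration (which also explores scaling schemes for the propagated error).

```python
import numpy as np
import scipy.sparse as sp

def sym_norm_adj(A):
    """Build S = D^(-1/2) A D^(-1/2) from a sparse adjacency matrix A."""
    deg = np.asarray(A.sum(axis=1)).ravel()
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1.0))  # guard isolated nodes
    D = sp.diags(d_inv_sqrt)
    return D @ A @ D

def propagate(X0, S, alpha, iters=50):
    """Label propagation: iterate X <- (1 - alpha) * X0 + alpha * S @ X."""
    X = X0.copy()
    for _ in range(iters):
        X = (1.0 - alpha) * X0 + alpha * (S @ X)
    return X

def correct_and_smooth(Z, Y_train, train_idx, S, alpha1=0.8, alpha2=0.8):
    """Z: (n, c) base-model class probabilities; Y_train: one-hot labels
    for the training nodes indexed by train_idx."""
    n, c = Z.shape
    # Correct: spread the residual error on the training nodes over the graph.
    E = np.zeros((n, c))
    E[train_idx] = Y_train - Z[train_idx]
    Z_corrected = Z + propagate(E, S, alpha1)  # fixed scale; the paper also autoscales
    # Smooth: clamp training nodes to their true labels, then propagate predictions.
    Z_corrected[train_idx] = Y_train
    return propagate(Z_corrected, S, alpha2)
```

In the paper, the α values, iteration counts, and error-scaling scheme vary per benchmark; the point is that both steps reuse the same cheap propagation primitive rather than any learned graph component.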

Authors

Qian Huang

Horace He

Abhay Singh

Ser-Nam Lim

Austin R. Benson

Publisher

ICLR 2021


