December 06, 2021
Theoretical analyses of graph learning methods often assume complete observation of the input graph. Such an assumption may not hold in practice, since scalability issues make it infeasible to process arbitrarily large graphs in full. In this work, we develop a theoretical framework for graph classification problems in the partial observation setting (i.e., subgraph sampling). Equipped with insights from graph limit theory, we propose a new graph classification model that works on a randomly sampled subgraph, together with a novel topology to characterize the model's representability. Our framework provides a theoretical validation of mini-batch learning on graphs and leads to new learning-theoretic results on generalization bounds as well as size-generalizability, without assumptions on the input.
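The partial-observation setting described above can be illustrated with a minimal sketch of node-induced subgraph sampling, the operation underlying mini-batch learning on a large graph. The helper name and the adjacency-dict representation below are illustrative assumptions, not the paper's actual construction:

```python
import random

def sample_induced_subgraph(adj, k, seed=None):
    """Sample a node-induced subgraph of size k from a graph given as an
    adjacency dict {node: set_of_neighbors}. A classifier in the partial
    observation setting sees only such a sample, never the full graph.
    (Illustrative helper; not from the paper.)"""
    rng = random.Random(seed)
    keep = set(rng.sample(sorted(adj), k))
    # Retain only the sampled nodes and the edges among them.
    return {u: {v for v in adj[u] if v in keep} for u in keep}

# Toy graph: a 4-cycle 0-1-2-3-0.
g = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
sub = sample_induced_subgraph(g, 3, seed=0)
```

A mini-batch learner would draw many such samples and aggregate the model's predictions over them, rather than ever materializing the whole graph.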
December 06, 2021
Bilal Alsallakh, Narine Kokhlikyan, Vivek Miglani, Shubham Muttepawar, Edward Wang (AI Infra), Sara Zhang, David Adkins, Orion Reblitz-Richardson
December 06, 2021
Yinglong Xia
December 06, 2021
Weizhe Hua, Yichi Zhang, Chuan Guo, Zhiru Zhang, Edward Suh
November 12, 2021
Karthik Abinav Sankararaman, Aleksandrs Slivkins