[Mar 29th 2023] Learning Identity Effects with Graph Neural Networks


When: Mar 29th, 2023, 15:00 – 15:30
Where: Google Meet link


Graph Neural Networks (GNNs) have emerged in recent years as a powerful, data-driven tool for learning tasks on a wide range of graph domains. Among the proposed models, the so-called Message-Passing GNNs (MP-GNNs) have become increasingly popular thanks to their intuitive formulation, closely linked to the Weisfeiler-Lehman (WL) test for graph isomorphism, to which they have been proven equivalent. From a theoretical point of view, MP-GNNs have been shown to be universal approximators, and their generalization capabilities have recently been investigated under a variety of generalization measures (VC dimension, Rademacher complexity, etc.).
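To make the WL connection concrete, the following is a minimal sketch of 1-dimensional WL colour refinement, whose aggregation step mirrors one MP-GNN message-passing layer; the adjacency-list representation and the fixed round count are illustrative assumptions, not part of the talk:

```python
def wl_refinement(adj, rounds=3):
    """1-dimensional Weisfeiler-Lehman colour refinement (sketch).

    adj: adjacency list {node: [neighbours]}. Returns the sorted multiset
    of final colours; MP-GNNs distinguish graphs at best as this test does.
    """
    colors = {v: 0 for v in adj}  # uniform initial colouring
    for _ in range(rounds):
        # each node combines its own colour with the sorted multiset of
        # neighbour colours, mirroring one message-passing layer
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in adj
        }
        # relabel distinct signatures with fresh integer colours
        palette = {}
        for v, sig in signatures.items():
            colors[v] = palette.setdefault(sig, len(palette))
    return sorted(colors.values())

# two graphs can be isomorphic only if their colour multisets match
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
path = {0: [1], 1: [0, 2], 2: [1]}
print(wl_refinement(triangle) == wl_refinement(path))  # → False
```

Here the degree information alone already separates the triangle from the 3-node path after one round of refinement.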

The aim of our work is to investigate the generalization capabilities of MP-GNNs, showing both their potential and their practical limitations in learning so-called identity effects, i.e. the task of determining whether an object is composed of two identical patterns. We analyze two case studies: (i) two-letter words, extending the existing results for MLPs and RNNs presented in [1]; here we show that a network trained with SGD is unable to generalize to unseen patterns, since the training set is invariant to orthogonal transformations; (ii) dicyclic graphs, i.e. graphs composed of two cycles of arbitrary length, for which we present positive theoretical results, partially supported by numerical results.

The theoretical analysis is supported by an extensive experimental study.

[1] Brugiapaglia, S., Liu, M., and Tupper, P. "Invariance, encodings, and generalization: learning identity effects with neural networks." Neural Computation 34.8 (2022): 1756–1789.

Category: Seminars