A GNN is a function that takes graphs (with node features) and returns vectors in a Euclidean space. Since the input space of a GNN is non-Euclidean — graphs may have arbitrary size and topology — GNNs are more challenging to analyze than standard neural networks. In this talk, I will demonstrate how one classical result in graph theory, the Szemerédi Regularity Lemma, leads to theoretical results for GNNs, including generalization bounds, as well as to novel GNN designs that scale well to large graphs.
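To make the opening definition concrete, here is a minimal sketch of such a function: a message-passing GNN with mean aggregation and mean pooling, so that graphs of any size and topology map into the same Euclidean space. The architecture and all names here are illustrative assumptions, not the specific designs discussed in the talk.

```python
import numpy as np

def gnn_layer(A, X, W, b):
    """One message-passing layer: each node averages the features of
    its neighbors (and itself), then applies a linear map + ReLU."""
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)   # degrees for normalization
    H = (A_hat @ X) / deg                    # mean aggregation
    return np.maximum(H @ W + b, 0.0)        # ReLU nonlinearity

def gnn_readout(A, X, weights):
    """Stack layers, then mean-pool the node embeddings into a single
    vector: the graph-level output lives in a fixed Euclidean space
    regardless of the input graph's size."""
    H = X
    for W, b in weights:
        H = gnn_layer(A, H, W, b)
    return H.mean(axis=0)

# Two graphs of different sizes map to vectors of the same dimension.
rng = np.random.default_rng(0)
weights = [(rng.standard_normal((2, 4)), np.zeros(4)),
           (rng.standard_normal((4, 4)), np.zeros(4))]

A1 = np.array([[0., 1.], [1., 0.]])                       # 2-node graph
X1 = rng.standard_normal((2, 2))
A2 = np.array([[0., 1., 1.], [1., 0., 0.], [1., 0., 0.]])  # 3-node graph
X2 = rng.standard_normal((3, 2))

v1 = gnn_readout(A1, X1, weights)
v2 = gnn_readout(A2, X2, weights)
```

Because the aggregation is permutation-invariant and the readout pools over all nodes, both `v1` and `v2` are vectors in the same 4-dimensional space, despite coming from graphs of different sizes — exactly the non-Euclidean-to-Euclidean mapping the abstract describes.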