In 1967, Lovász proved that two graphs G and H are isomorphic if, and only if, they are homomorphism indistinguishable over the family of all graphs, i.e., for all graphs F, the number of homomorphisms from F to G equals the number of homomorphisms from F to H. In recent years, many natural relaxations of graph isomorphism from fields as diverse as quantum information theory, algebraic graph theory, convex optimisation, and category theory have been characterised as homomorphism indistinguishability relations over restricted graph classes. Furthermore, homomorphism indistinguishability has proven useful for analysing graph learning architectures, as demonstrated by Zhang, Gai, Du, Ye, He, & Wang (2024) and Gai, Du, Zhang, Maron, & Wang (2025).
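To make the definition concrete, here is a minimal brute-force sketch of the homomorphism count hom(F, G) (my own illustration, not from the abstract); it enumerates all vertex maps and is only meant for tiny graphs.

```python
# Brute-force homomorphism counting (illustrative sketch, exponential in |V(F)|).
# hom(F, G) counts the maps h: V(F) -> V(G) sending every edge of F to an edge of G.
from itertools import product

def hom(F_edges, F_nodes, G_edges, G_nodes):
    """Count homomorphisms from F to G, given as edge lists over node lists."""
    G_adj = set(G_edges) | {(v, u) for (u, v) in G_edges}  # treat G as undirected
    count = 0
    for image in product(G_nodes, repeat=len(F_nodes)):
        h = dict(zip(F_nodes, image))
        if all((h[u], h[v]) in G_adj for (u, v) in F_edges):
            count += 1
    return count

# Lovász's theorem: G and H are isomorphic iff hom(F, G) == hom(F, H) for every F.
# E.g., homomorphisms from a triangle count the target's triangles, each 6 times:
triangle = ([(0, 1), (1, 2), (0, 2)], [0, 1, 2])
K4_edges = [(i, j) for i in range(4) for j in range(i + 1, 4)]
print(hom(*triangle, K4_edges, list(range(4))))  # 24 = 6 * 4 triangles in K4
```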
Abstracting from the wealth of these results, we set out to develop a theory of homomorphism indistinguishability that provides insights into the descriptive and computational complexity of graph isomorphism relaxations. That is, for a graph class 𝓕, we ask
- what is the distinguishing power of homomorphism counts from graphs F ∊ 𝓕, and
- what is the complexity of deciding homomorphism indistinguishability over 𝓕?
In this talk, I will discuss several ingredients for better understanding the behavior of graph machine learning, and especially GNNs, on large random graphs. I will present the random geometric graph model from the probability and statistics community, and show how it lets us draw conclusions about the convergence and stability of some deep architectures. Based on joint work with A. Bietti, M. Cordonnier, N. Keriven, and N. Tremblay.
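For concreteness, a minimal sketch of the plain random geometric graph model (my own rendering of the standard setup, not code from the talk): n latent positions drawn uniformly in the unit square, with an edge between any two points at Euclidean distance at most r.

```python
# Random geometric graph sketch (standard model; parameters n, r are illustrative).
import numpy as np

def random_geometric_graph(n, r, seed=None):
    """Return latent positions and the edge list of a random geometric graph."""
    rng = np.random.default_rng(seed)
    pts = rng.random((n, 2))  # latent positions sampled uniformly in [0, 1]^2
    dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n) if dists[i, j] <= r]
    return pts, edges

pts, edges = random_geometric_graph(200, 0.1, seed=0)
# Away from the boundary, the expected edge count is about C(200, 2) * pi * r^2.
print(len(edges))
```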
Scaling graph neural networks (GNNs) is crucial in modern applications. For this purpose, a rich line of sampling‑based approaches (neighborhood, layer‑wise, cluster, and subgraph sampling) has made GNNs practically scalable. In this talk, I will briefly survey these sampling‑based GNN methods and then show how the local graph‑limit (Benjamini–Schramm) perspective offers a clean, potentially unifying tool for the theoretical understanding of sampling‑based GNNs. Leveraging this perspective, we prove that, under mild assumptions, parameters learned from training GNNs on small, fixed‑size samples of a large input graph are within an $\epsilon$‑neighborhood of those obtained by training the same architecture on the entire graph. We derive bounds on the number of samples, the subgraph size, and the training steps required. Our results offer a principled explanation for the empirical success of training on subgraph samples, aligning with the literature's notion of transferability. This is based on joint work with Luana Ruiz and Amin Saberi.
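As a rough illustration of the local view behind Benjamini–Schramm sampling (my own sketch with hypothetical names, not the authors' code): sample a uniformly random root and return its r-hop ball, which is the kind of fixed-size local sample that neighborhood-sampling GNN trainers effectively operate on.

```python
# Local (rooted) neighborhood sampling sketch in the Benjamini-Schramm spirit.
import random
from collections import deque

def r_hop_ball(adj, r, rng=random):
    """Pick a root uniformly at random and BFS out to distance r.

    adj: dict mapping each node to its list of neighbors.
    Returns the root and the set of nodes within distance r of it.
    """
    root = rng.choice(list(adj))
    dist = {root: 0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        if dist[u] == r:
            continue  # do not expand past the r-hop boundary
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return root, set(dist)

# Toy usage on a 6-cycle: a 1-hop ball is the root plus its two neighbors.
cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(r_hop_ball(cycle, 1))
```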