Scaling graph neural networks (GNNs) to large graphs is crucial in modern applications. For this purpose, a rich line of sampling‑based approaches (neighborhood, layer‑wise, cluster, and subgraph sampling) has made GNNs practically scalable. In this talk, I will briefly survey these sampling‑based GNN methods and then show how the local graph‑limit (Benjamini–Schramm) perspective offers a clean, potentially unifying tool for the theoretical understanding of sampling‑based GNNs. Leveraging this perspective, we prove that, under mild assumptions, the parameters learned by training a GNN on small, fixed‑size samples of a large input graph lie within an $\epsilon$‑neighborhood of those obtained by training the same architecture on the entire graph. We derive bounds on the number of samples, the subgraph size, and the number of training steps required. Our results offer a principled explanation for the empirical success of training on subgraph samples, aligning with the literature's notion of transferability. This is based on joint work with Luana Ruiz and Amin Saberi.
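To make the claimed phenomenon concrete, here is a toy numerical sketch, not the construction or proof technique from the talk: on a dense two-block stochastic block model, a one-parameter linear "GNN" (a scalar weight applied to mean-aggregated node features) trained on small induced-subgraph samples lands near the weight trained on the full graph. The graph model, task, and sizes below are illustrative assumptions.

```python
# Toy sketch (illustrative assumptions, not the talk's setting): compare a
# scalar GNN weight fit on the full graph vs. on small subgraph samples.
import numpy as np

rng = np.random.default_rng(0)

def mean_aggregate(A, x):
    """One round of neighborhood mean aggregation with self-loops."""
    A = A + np.eye(len(A))
    return (A @ x) / A.sum(axis=1)

def fit_weight(A, x, y):
    """Least-squares scalar weight w for the model y ~ w * aggregate(x)."""
    h = mean_aggregate(A, x)
    return float(h @ y / (h @ h))

# Two-block stochastic block model: intra-/inter-block edge probs a, b.
n, a, b = 2000, 0.10, 0.02
labels = np.sign(np.arange(n) - n / 2 + 0.5)        # block labels: -1 / +1
probs = np.where(np.equal.outer(labels, labels), a, b)
A_full = (rng.random((n, n)) < probs).astype(float)
A_full = np.triu(A_full, 1)
A_full = A_full + A_full.T                          # symmetric, no self-loops

x = labels + 0.5 * rng.normal(size=n)               # noisy block features
y = labels                                          # targets: block sign

w_full = fit_weight(A_full, x, y)                   # train on the whole graph
w_sub = [fit_weight(A_full[np.ix_(s, s)], x[s], y[s])
         for s in (rng.choice(n, 200, replace=False) for _ in range(20))]
print(f"full graph: w = {w_full:.3f}")
print(f"20 subgraphs of 200 nodes: w = {np.mean(w_sub):.3f} +/- {np.std(w_sub):.3f}")
```

Under these favorable, dense assumptions the subgraph-trained weights cluster around the full-graph weight; the talk's result makes this precise via local graph limits, with explicit bounds on the number of samples, the subgraph size, and the training steps.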
Graphons are powerful tools for modeling large-scale graphs, serving both as limit objects for dense graph sequences and as generative models for random graphs. This bootcamp introduces graphons from a machine learning (ML) perspective, with an emphasis on their applications in graph information processing and graph neural networks (GNNs). We will begin with the mathematical foundations of graphon theory, including homomorphism densities, cut distance, sampling, dense graph convergence, and convergence of spectra. From there, we will explore how graphons can be used to formalize the convergence of convolutional architectures on convergent sequences of graphs, and what this reveals about the transferability of GNNs trained on subsampled graph data. We will also discuss recent advances in graphon-based ML, practical limitations of the graphon model in modern ML, and alternative approaches for capturing structure in sparser large graphs.
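As a minimal sketch of the sampling step mentioned above: a graphon $W : [0,1]^2 \to [0,1]$ generates an $n$-node random graph by drawing latent positions $u_i \sim \mathrm{Uniform}[0,1]$ and connecting $i$ and $j$ independently with probability $W(u_i, u_j)$. The particular graphon $W(x, y) = xy$ below is an illustrative choice, not one from the bootcamp.

```python
# Standard W-random graph sampling; the graphon W(x, y) = x * y is an
# illustrative choice for this sketch.
import numpy as np

rng = np.random.default_rng(0)

def sample_graphon(W, n):
    """Sample an n-node W-random graph; returns latents and adjacency."""
    u = rng.random(n)                      # latent positions u_i ~ Uniform[0, 1]
    P = W(u[:, None], u[None, :])          # edge probabilities W(u_i, u_j)
    A = (rng.random((n, n)) < P).astype(float)
    A = np.triu(A, 1)                      # one independent coin per pair
    return u, A + A.T                      # symmetrize, no self-loops

W = lambda x, y: x * y
u, A = sample_graphon(W, n=3000)

# Dense graph convergence in its simplest instance: the empirical edge
# density converges to the edge homomorphism density t(K2, W) = 1/4.
n = A.shape[0]
print("empirical edge density:", A.sum() / (n * (n - 1)))  # expected ~0.25
```

The printed quantity ties the generative view to the limit-object view: the edge density of the sampled graph converges, as $n$ grows, to the homomorphism density of a single edge in $W$, here $t(K_2, W) = \int_0^1 \int_0^1 xy \, dx \, dy = 1/4$.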