Utility navigation

  • Calendar
  • Contact
  • Login
  • MAKE A GIFT
Berkeley University of California
Home

Main navigation

  • Programs & Events
    • Research Programs
    • Workshops & Symposia
    • Public Lectures
    • Research Pods
    • Internal Program Activities
    • Algorithms, Society, and the Law
  • Participate
    • Apply to Participate
    • Propose a Program
    • Postdoctoral Research Fellowships
    • Law and Society Fellowships
    • Science Communicator in Residence Program
    • Circles
    • Breakthroughs Workshops and Goldwasser Exploratory Workshops
  • People
    • Scientific Leadership
    • Staff
    • Current Long-Term Visitors
    • Research Fellows
    • Postdoctoral Researchers
    • Scientific Advisory Board
    • Governance Board
    • Affiliated Faculty
    • Science Communicators in Residence
    • Law and Society Fellows
    • Chancellor's Professors
  • News & Videos
    • News
    • Videos
  • Support for the Institute
    • Annual Fund
    • All Funders
    • Institutional Partnerships
  • For Visitors
    • Visitor Guide
    • Plan Your Visit
    • Location & Directions
    • Accessibility
    • Building Access
    • IT Guide
  • About

Results 1431–1440 of 23,852

Workshop Talk
|
Aug. 12, 2025

A Local Graph Limits Perspective on Sampling-Based GNNs

Scaling graph neural networks (GNNs) is crucial in modern applications. For this purpose, a rich line of sampling‑based approaches (neighborhood, layer‑wise, cluster, and subgraph sampling) has made GNNs practically scalable. In this talk, I will briefly survey these sampling‑based GNN methods and then show how the local graph‑limit (Benjamini–Schramm) perspective offers a clean, potentially unifying tool for the theoretical understanding of sampling‑based GNNs. Leveraging this perspective, we prove that, under mild assumptions, parameters learned from training GNNs on small, fixed‑size samples of a large input graph are within an $\epsilon$‑neighborhood of those obtained by training the same architecture on the entire graph. We derive bounds on the number of samples, the subgraph size, and the training steps required. Our results offer a principled explanation for the empirical success of training on subgraph samples, aligning with the literature's notion of transferability. This is based on joint work with Luana Ruiz and Amin Saberi.
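As a rough illustration of the neighborhood-sampling idea the abstract surveys, the following is a minimal, hypothetical sketch of GraphSAGE-style fanout sampling (not the speakers' implementation; the function name and toy graph are invented for illustration):

```python
import random
from collections import defaultdict

def sample_neighborhood(adj, root, fanout, depth):
    """Sample a rooted subgraph by keeping at most `fanout` random
    neighbors per node, for `depth` hops from the root."""
    nodes = {root}
    frontier = [root]
    for _ in range(depth):
        nxt = []
        for u in frontier:
            nbrs = adj[u]
            for v in random.sample(nbrs, min(fanout, len(nbrs))):
                if v not in nodes:
                    nodes.add(v)
                    nxt.append(v)
        frontier = nxt
    return nodes

# Toy graph: a path 0-1-2-3-4
adj = defaultdict(list)
for u, v in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    adj[u].append(v)
    adj[v].append(u)

# Two-hop sample rooted at node 0 reaches {0, 1, 2}
sub = sample_neighborhood(adj, root=0, fanout=2, depth=2)
```

Training on many such fixed-size rooted samples, rather than on the full graph, is exactly the regime the talk's transferability result addresses.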

Workshop Talk
|
Aug. 12, 2025

Boot camp on logic and graph learning

Abstract not available.

Video
|
Aug. 12, 2025

Machine learning on point clouds and other varying-size objects

Video
|
Aug. 12, 2025

Toward Universal Graph Representations: Foundations and Frontiers of Graph Foundation Models

Video
|
Aug. 12, 2025

Size (OOD) Generalization of Neural Models via Algorithmic Alignment

Video
|
Aug. 12, 2025

Szemerédi Regularity Lemma in Graph Machine Learning

Video
|
Aug. 12, 2025

Boot camp on generalization theory for graph learning

Workshop Talk
|
Aug. 11, 2025

Boot camp on graph foundation models

Abstract not available.

Workshop Talk
|
Aug. 11, 2025

Boot camp on graphons for graph learning

Graphons are powerful tools for modeling large-scale graphs, serving both as limit objects for dense graph sequences and as generative models for random graphs. This bootcamp introduces graphons from a machine learning (ML) perspective, with an emphasis on their applications in graph information processing and graph neural networks (GNNs). We will begin with the mathematical foundations of graphon theory, including homomorphism densities, cut distance, sampling, dense graph convergence, and convergence of spectra. From there, we explore how graphons can be used to formalize the convergence of convolutional architectures on convergent sequences of graphs, and what this reveals about the transferability of GNNs trained on subsampled graph data. We will also discuss recent advances in graphon-based ML, practical limitations of the graphon model in modern ML, and alternative approaches for capturing structure in sparser large graphs.
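The generative use of graphons described above can be sketched in a few lines: draw i.i.d. latent positions in [0,1] and connect each pair independently with probability given by the graphon. This is an illustrative W-random-graph routine under that standard construction, not code from the bootcamp:

```python
import numpy as np

def sample_from_graphon(W, n, seed=None):
    """Sample an n-node W-random graph: draw x_i ~ Uniform[0,1] i.i.d.,
    then connect i < j independently with probability W(x_i, x_j)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(size=n)
    P = W(x[:, None], x[None, :])           # pairwise edge probabilities
    U = rng.uniform(size=(n, n))
    A = (np.triu(U, 1) < np.triu(P, 1)).astype(int)
    return A + A.T                          # symmetric, no self-loops

# Example graphon: W(x, y) = 1 - max(x, y)
W = lambda x, y: 1.0 - np.maximum(x, y)
A = sample_from_graphon(W, n=200, seed=0)
```

Subsampling a large graph and sampling from its estimated graphon play symmetric roles in the transferability arguments the bootcamp covers: both produce graphs that converge to the same limit object.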

Workshop Talk
|
Aug. 11, 2025

Boot camp on invariances in graph learning

Abstract not available.

The Simons Institute for the Theory of Computing is the world's leading venue for collaborative research in theoretical computer science.

Footer

  • Programs & Events
  • Participate
  • Workshops & Symposia
  • Contact Us
  • Calendar
  • Accessibility

Footer social media

  • Twitter
  • Facebook
  • YouTube
© 2013–2026 Simons Institute for the Theory of Computing. All Rights Reserved.