Abstract
The last few years have seen a deluge of discrete-to-continuum convergence results for various problems in graph-based learning. This theory makes connections between discrete machine learning and continuum partial differential equations or variational problems, leading to new insights and better algorithms. Current works use either a gamma-convergence framework, which yields convergence rates in energy norms (i.e., L2), or PDE techniques such as the maximum principle, which give uniform convergence rates, albeit under more restrictive conditions on the graph bandwidth that often rule out the sparse graphs used in practice.
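To fix ideas (the notation below is a standard setting assumed for illustration, not spelled out in this abstract), a typical model is a random geometric graph on samples $x_1,\dots,x_n$ from a domain $\Omega \subset \mathbb{R}^d$, with edge weights
\[
  w_{ij} = \eta\!\left(\frac{|x_i - x_j|}{\varepsilon}\right),
\]
where $\eta$ is a compactly supported kernel and $\varepsilon > 0$ is the graph bandwidth (length scale). Such graphs are connected with high probability only when $\varepsilon$ is at least on the order of $(\log n / n)^{1/d}$, and the bandwidth restrictions referred to above typically require $\varepsilon$ to be substantially larger than this connectivity threshold.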
In this work we revisit the continuum limit of Lipschitz learning and show how to extend uniform convergence rates to any graph length scale strictly above the connectivity threshold. The key idea in the proof is a homogenized non-local operator with a much larger bandwidth. We will sketch the main ideas of the proof and indicate how the approach may be used in other problems, such as spectral convergence of graph Laplacians.
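As an illustration (the precise setting is given in the talk; the formulation below is a standard one rather than a quotation from it), Lipschitz learning seeks a minimal Lipschitz extension of labels $g$ prescribed on a set $\Gamma$ of vertices, characterized by the graph infinity-Laplace equation
\[
  \max_{j} w_{ij}\bigl(u(x_j)-u(x_i)\bigr) + \min_{j} w_{ij}\bigl(u(x_j)-u(x_i)\bigr) = 0 \quad \text{for } x_i \notin \Gamma, \qquad u = g \text{ on } \Gamma.
\]
The continuum limit is the infinity-Laplace equation $\Delta_\infty u = 0$ with the corresponding boundary data, and the result described above gives uniform convergence rates toward this limit for any length scale strictly above the connectivity threshold.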
Joint work with Leon Bungert and Tim Roith.