Many modern learning tasks require models that can take inputs of varying sizes. Consequently, dimension-independent architectures have been proposed for domains where the inputs are graphs, sets, and point clouds. In this talk, we define a new invariant machine learning model on point clouds, using ideas from Galois theory. Afterwards, we introduce a general framework for transferability across dimensions. We show that transferability corresponds precisely to continuity in a limit space formed by identifying small problem instances with equivalent large ones.
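The abstract leaves the construction implicit, but one classical route from symmetric-function (Galois-theoretic) ideas to permutation invariants is through power-sum polynomials. The sketch below is a minimal, hypothetical NumPy illustration of such a descriptor for point clouds of varying size; the function name and degree choice are assumptions, not the speaker's method.

```python
import numpy as np

def power_sum_features(points: np.ndarray, max_degree: int = 4) -> np.ndarray:
    """Permutation-invariant descriptor of a point cloud.

    points: array of shape (n_points, dim).
    Returns a (max_degree, dim) array of averaged power sums, one row per degree.
    Power sums generate the permutation-invariant polynomials, the classical
    route to invariants under the symmetric group S_n.
    """
    degrees = np.arange(1, max_degree + 1)[:, None, None]     # shape (d, 1, 1)
    # Average rather than sum so clouds of different sizes stay comparable.
    return (points[None, :, :] ** degrees).mean(axis=1)       # shape (d, dim)

# Usage: two clouds that differ only by a reordering of points
# yield (numerically) identical features.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(100, 3))
shuffled = cloud[rng.permutation(len(cloud))]
assert np.allclose(power_sum_features(cloud), power_sum_features(shuffled))
```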
In this talk, we explore the quest for universal representations in graph foundation models, a challenging goal in modern graph machine learning. We begin by dissecting the inherent tensions between positional and structural node embeddings in graphs, highlighting how task-specific symmetries and invariances must be overcome to create effective universal embeddings. We then examine the role of invariances in statistical tests for addressing the challenges posed by distinct attribute domains across graph datasets. The talk concludes with novel applications of graph learning to algorithmic reasoning, particularly in real-world network optimization problems.
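As a toy illustration of the positional-versus-structural tension mentioned above (not part of the talk; the networkx-based construction and names below are illustrative assumptions): structural embeddings assign identical vectors to structurally identical nodes, while positional embeddings can tell such nodes apart but are only defined up to basis or sign choices.

```python
import numpy as np
import networkx as nx

def structural_embedding(G: nx.Graph) -> np.ndarray:
    """Structural embedding: each node is described by local statistics
    (here, degree and triangle count), so isomorphic nodes receive
    identical vectors regardless of how the graph is labeled."""
    return np.array([[G.degree(v), nx.triangles(G, v)] for v in G.nodes()],
                    dtype=float)

def positional_embedding(G: nx.Graph, k: int = 2) -> np.ndarray:
    """Positional embedding: the first k non-trivial Laplacian eigenvectors.
    These can separate structurally identical nodes, but only up to
    sign/basis ambiguities -- one of the symmetries alluded to above."""
    L = nx.normalized_laplacian_matrix(G).toarray()
    _, vecs = np.linalg.eigh(L)
    return vecs[:, 1:k + 1]

G = nx.cycle_graph(6)
print(structural_embedding(G))  # identical rows: every node looks the same locally
print(positional_embedding(G))  # rows differ: nodes get "coordinates" on the cycle
```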
Size (or length) generalization is a key challenge in designing neural modules to perform algorithmic tasks. Specifically, when can a neural model with bounded complexity generalize to problem instances of arbitrary size? Size generalization is a special case of out-of-distribution (OOD) generalization. In this talk, I will focus on approaches to achieve size generalization by "aligning" the neural models with certain algorithmic structures. I will first present a theoretical result showing the benefit of algorithmic alignment: a combination of algorithmic alignment, sparsity regularization, and a carefully curated small training set enables provable size/length generalization in approximating the Bellman-Ford algorithm on arbitrary graphs. We will then present examples of designing practical and efficient neural models for a family of geometric optimization problems via algorithmic alignments. This talk is based on joint work with Riley Nerem, Samantha Chen, Sanjoy Dasgupta, and Anastasios Sidiropoulos.
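The abstract does not spell out the construction, but the alignment it refers to can be sketched minimally as follows. This is a hypothetical NumPy illustration: the hand-coded min/plus relaxation below is exactly the Bellman-Ford update, and an "aligned" neural model would replace it with learned message and aggregation functions of the same shape.

```python
import numpy as np

def bellman_ford_aligned_update(dist, edges, weights):
    """One round of 'algorithmically aligned' message passing.
    dist: current distance estimate per node, shape (n,);
    edges: list of (u, v) pairs; weights: matching edge weights.
    The update mirrors the Bellman-Ford relaxation
        d[v] <- min(d[v], min over edges (u, v) of d[u] + w(u, v)),
    which is why min-aggregation GNNs align well with shortest-path tasks."""
    new_dist = dist.copy()
    for (u, v), w in zip(edges, weights):
        new_dist[v] = min(new_dist[v], dist[u] + w)
    return new_dist

# Usage on a tiny graph: 0 -> 1 -> 2, plus a direct (heavier) edge 0 -> 2.
edges = [(0, 1), (1, 2), (0, 2)]
weights = [1.0, 1.0, 5.0]
dist = np.array([0.0, np.inf, np.inf])    # source node 0
for _ in range(len(dist) - 1):            # n-1 rounds suffice, as in Bellman-Ford
    dist = bellman_ford_aligned_update(dist, edges, weights)
print(dist)                               # [0. 1. 2.]
```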
A GNN is a function that takes graphs (with node features) and returns vectors in a Euclidean space. Since the input space of a GNN is non-Euclidean (graphs can be of any size and topology), GNNs are more challenging to analyze than standard neural networks. In this talk, I will demonstrate how one classical result in graph theory, the Szemerédi Regularity Lemma, leads to theoretical results for GNNs, including generalization bounds, as well as to novel GNN designs that scale very well to large graphs.
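To make the first sentence concrete, here is a minimal NumPy sketch (not from the talk; the mean-aggregation rule and names are illustrative assumptions) of one message-passing layer. Because the same weight matrices are shared across all nodes, the layer is defined for graphs of any size, which is what makes a GNN a function on the non-Euclidean space of graphs.

```python
import numpy as np

def mean_aggregation_gnn_layer(X, A, W_self, W_neigh):
    """One message-passing layer: each node mixes its own features with the
    mean of its neighbors' features.
    X: node features (n, d); A: adjacency matrix (n, n);
    W_self, W_neigh: shared weight matrices (d, d')."""
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)  # avoid division by zero
    neighbor_mean = (A @ X) / deg
    return np.tanh(X @ W_self + neighbor_mean @ W_neigh)

# Usage: the same layer (same weights) applies unchanged to a 4-node
# and a 400-node random graph.
rng = np.random.default_rng(0)
W_self, W_neigh = rng.normal(size=(3, 8)), rng.normal(size=(3, 8))
for n in (4, 400):
    A = (rng.random((n, n)) < 0.1).astype(float)
    A = np.maximum(A, A.T)           # symmetrize
    np.fill_diagonal(A, 0.0)         # no self-loops
    X = rng.normal(size=(n, 3))
    print(mean_aggregation_gnn_layer(X, A, W_self, W_neigh).shape)  # (n, 8)
```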