Abstract

In this talk, we explore the quest for universal representations in graph foundation models—a challenging goal in modern graph machine learning. We begin by dissecting the inherent tensions between positional and structural node embeddings in graphs, highlighting the task-specific symmetries and invariances that must be overcome to create effective universal embeddings. We then examine the role of invariances in statistical tests for addressing the challenges posed by distinct attribute domains across graph datasets. The talk concludes with novel applications of graph learning to algorithmic reasoning, particularly in real-world network optimization problems.

Video Recording