Theory at the Institute and Beyond, January 2024

by Nikhil Srivastava (senior scientist, Simons Institute for the Theory of Computing)

The rain in Berkeley is busy washing Calvin Lab for a shiny new semester at the Simons Institute. Familiar and unfamiliar faces trickle in as we begin this spring’s programs on Error-Correcting Codes: Theory and Practice and on Quantum Algorithms, Complexity, and Fault Tolerance. There is a great deal of interaction between these areas via the hot topic of quantum error-correcting codes, which has seen a wave of progress in recent years. The structure of the semester’s workshops reflects this synergy: there is a common boot camp for both programs, and each of the quantum workshops begins with a day of tutorials to welcome outsiders.

This is a good moment to reflect on some of the exciting happenings of the past semester. One subject very much in the air during the Data Structures and Optimization for Fast Algorithms program was recent progress on the classical question of computing a maximum st-flow in a directed graph. I like this topic not just because the results are surprising, but also because they leverage decades of understanding developed by the TCS community on a staggering array of topics spanning the discrete and continuous worlds: interior point methods, data structures, online algorithms, Laplacian solvers, expander graphs, oblivious routing, low-stretch trees, spanners, sparsifiers, and more. It is a truly modern body of work displaying the maturity and unity of the theory of algorithms, as well as its gainful interactions with neighboring fields such as optimization and numerical linear algebra.

The rest of this article is my attempt at a genealogy of the ideas behind the almost-linear time algorithm for maximum st-flow by Chen et al. and its further refinements. Some of the themes discussed below have been cultivated in previous Institute programs, notably Algorithmic Spectral Graph Theory (2014) and Bridging Continuous and Discrete Optimization (2017). I encourage you to watch Aaron Sidford’s stunning boot camp introduction for a thoughtful overview of the surrounding algorithmic landscape (and a good example of how to run a boot camp, for future organizers!).

Four decades of max flow
The min-cost flow problem is the following: given a directed graph G with nonnegative integer edge capacities and costs, a flow value F, and two distinguished vertices s and t, find a feasible st-flow of value F of minimum cost, if one exists. When the costs are all 1, this problem is the feasibility version of max flow.
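
In symbols, this is the following linear program (my own schematic rendering, with a flow variable f_e on each edge e, rather than notation taken from any particular paper):

```latex
\begin{aligned}
\text{minimize}   \quad & \textstyle\sum_{e \in E} c_e f_e \\
\text{subject to} \quad & \textstyle\sum_{e \text{ into } v} f_e \;-\; \sum_{e \text{ out of } v} f_e \;=\; 0
      \qquad \text{for all } v \in V \setminus \{s, t\}, \\
& \textstyle\sum_{e \text{ out of } s} f_e \;-\; \sum_{e \text{ into } s} f_e \;=\; F, \\
& 0 \;\le\; f_e \;\le\; u_e \qquad \text{for all } e \in E.
\end{aligned}
```

The conservation constraints are affine, while the capacity constraints confine f to a box; this shape of the feasible region will reappear below.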

Min-cost flow (with unit capacities) was first studied by A.N. Tolstoi in 1930 in the context of cheaply moving goods on a rail network in the then relatively young Soviet Union. Max flow was first studied by T.E. Harris and Gen. F.S. Ross in the United States during the 1950s, with the dual interest of finding the most efficient way to destroy the capability of the same Soviet rail network (corresponding to a minimum cut). Max flow is now taught in undergraduate CS courses and can be used to solve other fundamental graph problems such as bipartite matching and sparsest cut. It is a special case of linear programming.

More than 90 years after its introduction, in 2022, Chen, Kyng, Liu, Peng, Probst Gutenberg, and Sachdeva [CKLPPS'22] designed an almost-linear time algorithm for exactly solving min-cost flow on general directed graphs with integer capacities and costs. The paper masterfully combines two ingredients from continuous and discrete algorithms — specifically, a certain kind of interior point method (IPM) for linear programming, and a randomized data structure for quickly producing the updates required by that IPM, exploiting the fact that the flow changes slowly over the course of the algorithm. There has recently been a further distillation of these techniques in Chen, Kyng, Liu, Meierhans, and Probst Gutenberg [CKLMP’23], so it is a natural time to appreciate what has been done.

Let me briefly explain the two components and their historical evolution. The first is a potential reduction interior point method for linear programming, originated by Karmarkar (1984). The min-cost flow LP aims to minimize a linear objective inside a polytope, which is an affine section of a (scaled) ℓ∞ ball. The idea of Karmarkar’s IPM is to define a smooth potential function on the interior of the polytope that blows up to -∞ at the facet/vertex on which the optimum is reached, and to +∞ elsewhere on the boundary, and then generate a sequence of iterates converging to its minimum by locally minimizing its quadratic Taylor series repeatedly. One striking thing about this method is that the potential is not convex, and yet repeated local optimization is guaranteed to reach a global minimum; the geometric reason for why this is expected comes from Karmarkar’s original framing of his algorithm in terms of projective transformations, but it is nonetheless not hard to prove once imagined. The reason for using IPMs is that they exhibit log(1/ϵ) type convergence, enabling an exact solution, as opposed to first-order methods like gradient descent, which can obtain only poly(1/ϵ) type convergence. First-order methods were used 10 years ago by Kelner, Lee, Orecchia, and Sidford (2014) and Sherman (2014) to obtain the first almost-linear time algorithms for approximate max flow on undirected graphs.
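
To give a flavor of what such a potential looks like, here is a generic potential-reduction potential for a standard-form LP, sketched from memory rather than quoted from any of the papers above:

```latex
% For min { c^T x : Ax = b, x >= 0 } with optimal value OPT and a parameter q
% on the order of the number of variables:
\Phi(x) \;=\; q \, \log\!\bigl(c^{\mathsf{T}} x - \mathrm{OPT}\bigr) \;-\; \sum_{i} \log x_i .
```

The first term tends to -∞ as the objective approaches its optimal value, the barrier term tends to +∞ as any coordinate approaches the boundary, and the weight q is chosen so that the first term wins near the optimum; driving Φ to -∞ therefore drives x to an optimal solution.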

Minimizing the quadratic approximation of the potential at a point amounts to moving as far as possible along its gradient direction inside an ellipsoid, which is just a scaled ℓ2 ball. The number of iterations required to approach the optimum is O(√m) for a graph with m edges, and each iterate corresponds to a flow in the graph. This iteration count arises from the distortion of √m between the ℓ2 norm and the ℓ∞ norm. It was observed by Daitch and Spielman in 2008 that for the min-cost flow LP, the ℓ2 optimization problem arising in each iteration amounts to computing an electrical flow in a reweighting of the graph, which can be carried out in Õ(m) time using fast Laplacian solvers based on “combinatorial preconditioning” due to Spielman and Teng (2004). This immediately yielded an Õ(m^{1.5} log U) algorithm for min-cost flow on graphs with integer capacities bounded by U, matching the best combinatorial algorithm for max flow due to Goldberg and Rao (1998). Starting with the breakthrough of Madry (2013), who obtained an exponent smaller than 1.5 for unit capacity graphs by carefully modifying a different IPM to reduce the number of iterations, a series of works over the past decade culminated in Kathuria, Liu, and Sidford (2020), who obtained a running time of m^{4/3+o(1)} for unit capacity graphs, corresponding to O(m^{1/3}) iterations of the potential reduction IPM.
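
Here is a minimal numerical sketch of the electrical-flow subproblem that each ℓ2 step reduces to. It is my own toy version: a dense least-squares solve on the weighted Laplacian stands in for the nearly-linear time Laplacian solvers discussed above, and the graph and weights are purely illustrative.

```python
# Electrical flow = weighted Laplacian solve (toy version, dense linear algebra).
import numpy as np

def electrical_flow(n, edges, conductances, s, t, value=1.0):
    """Route `value` units of current from s to t through the weighted graph."""
    m = len(edges)
    B = np.zeros((m, n))                     # signed edge-vertex incidence matrix
    for i, (u, v) in enumerate(edges):
        B[i, u], B[i, v] = 1.0, -1.0
    C = np.diag(conductances)
    L = B.T @ C @ B                          # weighted graph Laplacian
    demand = np.zeros(n)
    demand[s], demand[t] = value, -value
    # L is singular (constants are in its kernel); lstsq still returns valid potentials.
    potentials = np.linalg.lstsq(L, demand, rcond=None)[0]
    return C @ B @ potentials                # flow on edge (u, v) = c_e * (phi_u - phi_v)

# Example: unit-conductance 4-cycle, one unit of flow from vertex 0 to vertex 2;
# by symmetry half the flow goes each way around the cycle.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(electrical_flow(4, edges, np.ones(4), s=0, t=2))
```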

The dramatic idea of [CKLPPS'22] is to break from the “few expensive iterations” paradigm, which focused on optimizing the number of iterations while keeping each iteration linear time, and move to a world of “many cheap iterations.” Specifically, they replace the squared ℓ2 norm term of the quadratic approximation appearing in each iteration of the IPM by a squared ℓ1 norm term. The geometric distortion factor of m between the ℓ1 norm and the ℓ∞ norm yields a much coarser approximation of the potential, forcing this “ℓ1” IPM to take very small steps and resulting in an iteration count of O(m). This approach seems hopeless at the outset if one is aiming for an m^{1+o(1)} time algorithm — after all, how do you even update a vector of length m in subpolynomial time? — but it buys two crucial gains: (1) the updates in each step of the ℓ1 IPM are vertices of a section of a scaled ℓ1 ball, and correspond to certain cycles, called “min-ratio cycles,” in an auxiliary undirected weighted graph. In contrast, the updates of the original ℓ2 IPM were electrical flows, which do not have a nice combinatorial structure. (2) The smallness of the steps means that the flow and auxiliary graph change very slowly, with only O(1) edges changing significantly in each iteration.
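
Schematically, the subproblem that each ℓ1 step must (approximately) solve is a min-ratio cycle problem of roughly the following form, where g is the current gradient, ℓ are positive edge lengths, B is the edge-vertex incidence matrix of the auxiliary graph, and Δ ranges over circulations (this is my paraphrase of the setup, and the notation may differ from the papers):

```latex
\min_{\Delta \,:\, B^{\mathsf{T}} \Delta = 0}\;
  \frac{g^{\mathsf{T}} \Delta}{\bigl\| \operatorname{diag}(\ell)\, \Delta \bigr\|_{1}}
```

The optimal value is negative, and adding a suitably scaled, approximately optimal circulation to the current flow is what decreases the potential.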

This brings us to the second ingredient, which is a data structure that computes an “approximate min-ratio cycle” in the slowly changing auxiliary graph in each step of the IPM and adds it to the current flow, all in m^{o(1)} amortized time. Two phenomena enable such a low update time.

First, it turns out that the relevant cycles have very succinct implicit representations. Recall the duality between cycles and trees in undirected graphs: every spanning tree induces a collection of fundamental cycles indexed by off-tree edges. Flows that are linear combinations of fundamental cycles can be accessed by manipulating the corresponding trees and off-tree edges. These trees can in turn be efficiently accessed and updated in m^{o(1)} amortized time using a data structure called link-cut trees, introduced by Sleator and Tarjan in 1983 (in fact, for the purpose of speeding up “blocking flow” algorithms for max flow), which were also a key ingredient in the Goldberg-Rao algorithm. Crucially, [CKLPPS'22] show that the set of fundamental cycles induced by a small random collection of low-stretch trees of the auxiliary graph — trees whose fundamental cycles are short on average — is guaranteed to contain an approximate min-ratio cycle. Thus, it suffices to work with flows that have succinct implicit representations. Low-stretch trees were introduced by Alon, Karp, Peleg, and West in 1995 and played a key role in the Spielman-Teng solver as well as in the combinatorial Laplacian solver of Kelner, Orecchia, Sidford, and Zhu (2014), which may be viewed as iteratively producing an electrical flow by efficiently combining fundamental cycles of a single low-stretch tree.
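
As a toy illustration of this tree/cycle duality (my own sketch, with parent-pointer trees standing in for the far more powerful link-cut trees):

```python
# Fix a spanning tree; each off-tree edge (u, v) induces a fundamental cycle made
# of that edge plus the unique tree path between its endpoints. Link-cut trees
# support such path manipulations in polylogarithmic time; here we just walk
# parent pointers.

def tree_path(parent, a, b):
    """Path a -> ... -> b in the rooted tree given by the parent[] map."""
    ancestors_of_a = {a}
    x = a
    while parent[x] is not None:
        x = parent[x]
        ancestors_of_a.add(x)
    climb_b = [b]                        # climb from b until we hit an ancestor of a
    while climb_b[-1] not in ancestors_of_a:
        climb_b.append(parent[climb_b[-1]])
    lca = climb_b[-1]
    climb_a = [a]                        # climb from a to the same meeting point
    while climb_a[-1] != lca:
        climb_a.append(parent[climb_a[-1]])
    return climb_a + climb_b[-2::-1]     # a .. lca .. b

def fundamental_cycle(parent, off_tree_edge):
    """Cycle induced by one off-tree edge: the tree path v -> u closed by (u, v)."""
    u, v = off_tree_edge
    return tree_path(parent, v, u) + [v]

# Spanning tree rooted at 0 with edges 0-1, 1-2, 1-3, plus off-tree edge (2, 3).
parent = {0: None, 1: 0, 2: 1, 3: 1}
print(fundamental_cycle(parent, (2, 3)))  # [3, 1, 2, 3]
```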

Second, [CKLPPS'22] show that an appropriate collection of low-stretch trees can be maintained and queried very efficiently in the slowly changing auxiliary graph. This is quite complicated, but the high-level “multiscale” approach is reminiscent of the Spielman-Teng Laplacian solver: they reduce the data structure problem on a graph H to the same problem on a smaller “coarsened” graph H’ while maintaining a way to transfer solutions between the two, and recurse. The smaller graph H’ is produced using vertex and edge sparsification techniques including expander decompositions, spanners, and more. These are well-studied primitives in graph algorithms, but the new twist is that they must be adapted to work on a changing graph. It is very important here that the auxiliary graph is (essentially) undirected, since none of these tools works for directed graphs. Undirectedness is one of the main gifts of the IPM, which is missing in the purely combinatorial framework of residual graphs and augmenting paths. In any case, when the dust settles, the data structure is able to implement each step of the IPM in amortized m^{o(1)} time, yielding an almost-linear time algorithm.

There is one point in [CKLPPS'22] that was particularly subtle and (to me) intimidating. Namely, the questions that the IPM can ask the data structure in their algorithm depend on the data structure’s previous answers — a setup sometimes known as an adaptive (as opposed to oblivious) adversary in online algorithms. This means that the adversary can in principle “learn” the randomness inside the data structure and thereby foil it, a hazard reminiscent of “p-hacking” in the context of adaptive data analysis. The [CKLPPS'22] paper sidesteps the adaptivity issue by showing that the queries made by the IPM are not arbitrary and in fact have a very special structure — that it is a limited adversary, if you will. This special property allows them to show that the IPM must take a long time to degrade the data structure, which they can thereby afford to recompute on each occasion that it happens. The disadvantage of the limited-adversary approach is that the tight coupling between the two main components of the algorithm makes it difficult to modify (and to me, understand) each of them independently.

Very recently, in work completed during this past semester’s program on Data Structures and Optimization for Fast Algorithms, Chen, Kyng, Liu, Meierhans, and Probst Gutenberg [CKLMP'23] gave a deterministic data structure for computing a min-ratio cycle in a changing graph that works against a fully adaptive adversary, decoupling the IPM completely from the data structure. Combined with an insight of van den Brand, Liu, and Sidford (2023), this yields enough additional flexibility to produce an almost-linear time min-cost flow algorithm that works on incremental graphs — i.e., one that can efficiently maintain a solution on a graph to which edges are being added over time.

There are many new ideas in [CKLMP'23]. Perhaps the main one is to use a different graph-theoretic object, known as an oblivious routing scheme, to parameterize the min-ratio cycles in the auxiliary graph. Oblivious routings are closely related to low-stretch trees through the work of Räcke (2008), who showed that a distribution over low-stretch trees yields a good oblivious routing scheme. Whereas [CKLPPS'22] was able to maintain only a small subsample of such trees, [CKLMP'23] dynamically maintains the entire routing scheme, which turns out to be much more robust and enables the fully adaptive data structure. Doing this requires building on other recent results, such as an efficient parallel algorithm for oblivious routing due to Rozhoň et al. (2022), which they dynamize, and a concurrent fully dynamic algorithm for all-pairs shortest paths due to a subset of the authors. The details are quite involved.

This is not the end of the story. The analysis of the original Spielman-Teng solver was over 140 pages long. Digesting and simplifying it led to many algorithmic and mathematical advances over the past two decades, and the simplest Laplacian solver at the moment due to Kyng and Sachdeva (2016) is less than 20 pages long and much more efficient. The same remains to be done for max flow.

Thanks to Rasmus Kyng and Sushant Sachdeva for helpful conversations.
