Workshop Report: Structural Results


by Jordan Ellenberg (science communicator in residence, Summer 2023)

Some of the richest and most lasting mathematical ideas come about when two distinct fields come together and pay close attention to their intersections. That was the goal of the workshop on Structural Results held at the Simons Institute in July 2023, in which extremal combinatorists and theoretical computer scientists specializing in complexity convened to talk about the overlap between their two fields.

One highlight was Cosmin Pohoata’s talk about his recent breakthrough with Alex Cohen and Dmitrii Zakharov (both still graduate students!) on the Heilbronn triangle problem. It seems like a simple one: If you put n points in a unit square, what is the smallest area of a triangle formed by any three of the points? The answer, of course, depends on the location of the points. The question Hans Heilbronn asked in the 1940s was: How large can the smallest triangle be? In other words, how can we arrange the n points so that no three of them form a small triangle? Your first thought might be to arrange the points so that no two of them come too close to each other. But that’s not good enough — if three points are very close to lying on a straight line, the triangle they form will be long but very skinny, and will still have a small area! Indeed, if you toss n points randomly in the square, the smallest triangle you find will typically have an area around 1/n^3, but its diameter will be close to that of the square itself! Long skinny triangles are ubiquitous. Komlós, Pintz, and Szemerédi showed in 1981 that there has to be a triangle whose area is at most n^(-8/7), and there the problem stood for 40 years, until Pohoata and his collaborators broke the 8/7 barrier and showed that the smallest triangle actually has to be a shade smaller — its area is at most n^(-8/7 - 1/2000).
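For readers who like to experiment, here is a minimal sketch (mine, not from the talk) that drops n random points in the unit square, brute-forces the smallest triangle, and compares its area with 1/n^3; the point counts and random seed are arbitrary illustrative choices.

```python
import itertools
import random

def min_triangle_area(n, seed=0):
    """Smallest triangle area among n random points in the unit square (brute force)."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    best = float("inf")
    for (ax, ay), (bx, by), (cx, cy) in itertools.combinations(pts, 3):
        # Area from the cross product of the two edge vectors leaving (ax, ay).
        area = abs((bx - ax) * (cy - ay) - (cx - ax) * (by - ay)) / 2
        best = min(best, area)
    return best

if __name__ == "__main__":
    for n in (50, 100, 200):
        print(f"n={n:4d}: smallest area ~ {min_triangle_area(n):.2e}, 1/n^3 ~ {1 / n**3:.2e}")
```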

What do long skinny triangles have to do with computer science? Nothing, per se. But problems about sets of points in space are surprisingly applicable. Take Tselil Schramm’s talk, for instance, about her joint work with Liu, Mohanty, and Yang. The notion of an expander graph — a network in which a random walk disperses optimally quickly — is crucial to computer science, allowing advances in pseudorandomness, coding theory, and many other areas. Fascinatingly, for many years the situation was the following: it was known that random graphs were expanding with very high probability, but there was no specific family of graphs that could be proved to be expanding! Nowadays there has been an explosion of explicit expanders. But the situation is different in the newer field of high-dimensional expansion, in which instead of networks (which are composed of vertices, some pairs of which are connected by edges) one has complexes, in which the collections of vertices one keeps track of are not just pairs, but triples (or quadruples, or even larger groupings, but the work presented at the workshop was about triples). What Schramm and her collaborators have done is to show that two-dimensional expanders are plentiful (albeit with vertex degree somewhat higher than what one can do in the one-dimensional case), by showing that random constructions really do enjoy this property; in particular, you can take n points at random on a sphere whose dimension is about log n, and take the “edges” to be triples of points which are all within a small distance of each other. Small triangles again! (But this time, “small” in the sense of “all three vertices being mutually close to each other,” not “having small area.”)
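To make that random construction concrete, here is a hedged sketch (again mine, not the authors’ code) of the kind of object being described: n random points on a sphere whose dimension is about log n, with a triple kept whenever its three points are mutually close. The distance threshold tau is an arbitrary illustrative choice, not the tuned value from the paper, and nothing here certifies expansion.

```python
import itertools
import math
import random

def random_geometric_triples(n, tau, seed=0):
    """Mutually close triples among n random points on a sphere of dimension ~ log n."""
    rng = random.Random(seed)
    d = max(2, round(math.log(n)))  # ambient dimension grows like log n
    # Uniform points on the unit sphere: normalize Gaussian vectors.
    pts = []
    for _ in range(n):
        v = [rng.gauss(0.0, 1.0) for _ in range(d)]
        norm = math.sqrt(sum(x * x for x in v))
        pts.append([x / norm for x in v])
    close = {(i, j) for i, j in itertools.combinations(range(n), 2)
             if math.dist(pts[i], pts[j]) < tau}
    # Keep a triple only when all three of its pairs are close.
    return [(i, j, k) for i, j, k in itertools.combinations(range(n), 3)
            if (i, j) in close and (i, k) in close and (j, k) in close]

if __name__ == "__main__":
    tris = random_geometric_triples(n=150, tau=1.0)
    print(f"{len(tris)} mutually close triples among 150 points")
```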

What Pohoata’s and Schramm’s papers have in common is the difficulty of going from 2 to 3: problems about pairs of points are well understood, but problems about triangles are drastically harder! (This phenomenon is familiar from dynamics, where the two-body problem is a simple matter of ellipses, while the three-body problem remains frustratingly unsolved and an active topic of research.) But where the two papers part ways is in their relationship with randomness. The point of Schramm’s work is to prove that a random configuration of points achieves maximum performance. In the Heilbronn triangle problem, randomly chosen points definitely do not make the smallest triangle maximally big; to get to the optimum, one will have to choose points very, very carefully, perhaps according to the dictates of some yet-undiscovered structure.

This dichotomy between structure and randomness was an ever-present theme during the workshop. We saw it in Julia Wolf’s talk about her joint work with Caroline Terry, which was about a new regularity result concerning subsets of the space of Boolean vectors (strings of 0’s and 1’s). One information-theoretic way in which such a subset A could be thought of as “simple” would be if membership in A depended only on a few coordinates. More generally, one can call A regular if there is some linear projection p to a much lower-dimensional space of Boolean vectors such that membership of v in A depends only on the image of v under p, which carries much less information. What Wolf and Terry show is that a different notion of tameness, called “k-stability,” implies “approximate regularity” — that there is some projection p such that the value of p(v) determines membership of v in A with high probability. Now we are getting still closer to questions of computational complexity! And the relation was made even more apparent in Shachar Lovett’s Richard M. Karp Distinguished Lecture on communication complexity. In this talk, the setup is that two players, Xavier and Yelena, are trying to compute the value of a function f(x,y) which takes values in {1,-1}. The function is known to both players, but the problem is that Xavier knows only the value of x and Yelena knows only the value of y. Of course they could share their private values and compute f; but the game here is to compute f(x,y) using as few bits of communication between Xavier and Yelena as possible. This complexity problem is visibly of the same flavor as the regularity question contemplated by Terry and Wolf; and here, too, the exciting work arises when we try to compare this notion of complexity with other notions of a more linear-algebraic flavor.
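To fix ideas about the communication model, here is a toy sketch (mine, not part of the lecture) for one particular function, f(x, y) = (-1)^(parity(x) + parity(y)): the naive protocol has Xavier ship his entire input, while a smarter protocol sends only the single parity bit Yelena actually needs.

```python
import random

def f(x, y):
    # x and y are lists of bits; f takes values in {1, -1}.
    return -1 if (sum(x) + sum(y)) % 2 else 1

def naive_protocol(x, y):
    # Xavier sends all of x, so Yelena can evaluate f directly.
    return f(x, y), len(x)            # (value, bits communicated)

def one_bit_protocol(x, y):
    # For this particular f, the parity of x is all Yelena needs to know.
    parity_x = sum(x) % 2
    value = -1 if (parity_x + sum(y)) % 2 else 1
    return value, 1                   # (value, bits communicated)

if __name__ == "__main__":
    rng = random.Random(0)
    for _ in range(5):
        x = [rng.randint(0, 1) for _ in range(16)]
        y = [rng.randint(0, 1) for _ in range(16)]
        assert naive_protocol(x, y)[0] == one_bit_protocol(x, y)[0] == f(x, y)
    print("protocols agree; costs: 16 bits vs 1 bit")
```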

One of the high points of the week was the open-problem session held on the second afternoon of the workshop. In this informal setting, the pure mathematicians and computer scientists jumped to the board, interrogated each other, and laid out a whole spray of conundrums, some stemming directly from the talks we’d heard or were going to hear, others from farther afield. The structure, shall we say, allowed for a certain amount of randomness.
