Theory at the Institute and Beyond, July 2023
by Venkatesan Guruswami (Simons Institute)
Summer is typically a quiet time on university campuses, but not at the Simons Institute, where two programs — one on Analysis and TCS, and another on Quantum Computing — are buzzing along. One might recall that one of the inaugural programs hosted by the Simons Institute, back in Fall 2013, was Real Analysis in Computer Science. In the decade since, the field has cultivated influential new themes such as global hypercontractivity and spectral independence, incorporated methods based on high-dimensional expanders and stochastic calculus, and also enabled striking applications in hardness of approximation, Markov chain analysis, and coding theory. All this progress makes this an excellent time to reconvene a program on this topic. The Quantum Computing program has a special focus on the power of noisy intermediate-scale quantum (NISQ) devices, a subject of great current practical interest aimed at demonstrating quantum advantage with noisy devices (pre-quantum error correction), with unique challenges for and great synergy with theory.
These programs come on the heels of a very busy spring semester that hosted a program on Meta-Complexity, which I wrote about earlier, and an extended reunion on the theory and practice of Satisfiability. The participants in the latter program were exposed to the theoretical and the practical aspects of SAT solving in parallel; especially for junior researchers, getting such a perspective early in their careers provides an unparalleled platform from which to embark on interdisciplinary and high-impact research.
Generating primes, almost deterministically
One of the papers to come out of the Meta-Complexity program, co-authored by a team of five full-time participants in the program (Chen, Lu, Oliveira, Ren, and Santhanam), gives a randomized algorithm that, on infinitely many inputs n, runs in poly(n) time and with high probability generates a canonical n-bit prime. A randomized polynomial-time algorithm to generate some n-bit prime is easy, as one can just sample O(n) random n-bit numbers, test them for primality, and output a prime among them. But such an algorithm can output different primes on different runs. A deterministic algorithm that always outputs a single, fixed prime would be desirable, but such an algorithm remains elusive. A pseudodeterministic algorithm offers an intriguing middle ground: it is a randomized algorithm that, on any input, outputs a unique answer with high probability. Motivated by the question of generating canonical primes, the concept of pseudodeterministic algorithms was introduced by Gat and Goldwasser in 2011 and has since received much attention. But a pseudodeterministic polynomial-time algorithm for prime generation remained open, and the present work solves it modulo the caveat of only working infinitely often. Actually, the result has nothing to do with generating primes per se, and works for any property of numbers that occurs sufficiently often and that can be checked in polynomial time (both of which hold for primality).
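To make the contrast concrete, here is a minimal sketch (my own illustration, not from the paper) of the trivial randomized generator in Python, using a standard Miller-Rabin test. It outputs some n-bit prime with high probability, but the prime it outputs varies from run to run, which is exactly the behavior that pseudodeterminism is meant to eliminate.

import random

def is_probable_prime(m, rounds=40):
    # Miller-Rabin primality test (randomized).
    if m < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if m % p == 0:
            return m == p
    d, s = m - 1, 0
    while d % 2 == 0:          # write m - 1 = 2^s * d with d odd
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, m - 1)
        x = pow(a, d, m)
        if x in (1, m - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, m)
            if x == m - 1:
                break
        else:
            return False       # witness of compositeness found
    return True

def generate_some_prime(n):
    # Sample O(n) random n-bit numbers and return the first prime found.
    # Succeeds with high probability (by the prime number theorem), but the
    # output is not canonical: different runs typically return different primes.
    for _ in range(100 * n):
        candidate = random.getrandbits(n) | (1 << (n - 1)) | 1   # force n bits, odd
        if is_probable_prime(candidate):
            return candidate
    return None                # happens only with exponentially small probability

print(generate_some_prime(256))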
A few years ago, a subset of the present authors gave a subexponential (i.e., exp(n^{0.01})) pseudodeterministic algorithm for generating primes (again only for infinitely many lengths). This was based on a win-win argument that converts a conditional hardness-randomness trade-off (a specific uniform version due to Trevisan and Vadhan) into an unconditional pseudodeterministic algorithm. Namely, if a certain PSPACE-complete language L that they construct is not in BPP, then one can build a pseudorandom set of subexponential size that fools polynomial-time randomized algorithms (infinitely often). So in this case, one can derandomize the trivial randomized algorithm to generate primes and get a deterministic subexponential-time algorithm. On the other hand, if L is in BPP, then the polynomial-space algorithm that searches over all n-bit numbers to find the lexicographically smallest prime yields a polynomial-time pseudodeterministic algorithm.
The high-level idea in the new work is to bootstrap this win-win argument, by relying on recent uniform hardness-randomness trade-offs due to Chen and Tell that apply to generic computations computable by uniform circuits of bounded depth and size (as opposed to only the special language L). In each recursive bootstrapping step, one either obtains a pseudodeterministic polynomial-time construction of primes, or a significantly faster deterministic construction of primes of a larger input length. Intuitively, after O(log n) steps of recursion, one might hope that at least one of the steps leads to a polynomial-time pseudodeterministic algorithm at the input length considered at that step. This plan is cleverly executed in the paper, after overcoming some significant technical challenges. One of these is dealing with the fact that the Nisan-Wigderson generator used in the Chen-Tell approach does not have optimal parameters at all levels of hardness that are encountered in the recursion, so the Shaltiel-Umans generator based on Reed-Muller codes (a personal favorite of mine) is employed after suitable embellishments.
Solving integer linear programs faster
An integer linear program (ILP) asks, given an m × n matrix A and an m-dimensional vector b with rational entries, whether there is a vector x with integer coordinates such that Ax ≤ b (inequality between vectors is considered coordinate-wise). There are easy ILP formulations for many NP-hard problems, so ILP is itself NP-hard. However, when the dimension (number of variables n) is fixed, ILP can be solved in time polynomial in the input size S, by a beautiful result from 1983 due to Lenstra, who gave an algorithm with runtime exp(O(n^3)) poly(S). Later, in 1987, Kannan improved the runtime to n^{cn} poly(S) for some absolute constant c. While there have been improvements to this exponent c, the best-known dependence on n remained n^{O(n)}. A long-standing question has been whether this can be improved to exp(O(n)) (which would be best possible under the exponential-time hypothesis).
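As a toy illustration of the Ax ≤ b format (my own example, not from the papers under discussion), here is Vertex Cover on a 4-cycle written as an integer feasibility problem, together with a brute-force check over 0/1 assignments, the kind of exponential-in-n enumeration that the fixed-dimension algorithms of Lenstra, Kannan, and Dadush improve upon.

from itertools import product

def vertex_cover_ilp(edges, num_vertices, k):
    # Encode "is there a vertex cover of size <= k?" as A x <= b with x integral:
    #   -x_u - x_v <= -1  for every edge {u, v}   (every edge is covered)
    #    sum_v x_v <= k                           (at most k vertices chosen)
    #    x_v <= 1 and -x_v <= 0                   (0/1 variables)
    A, b = [], []
    for (u, v) in edges:
        row = [0] * num_vertices
        row[u], row[v] = -1, -1
        A.append(row)
        b.append(-1)
    A.append([1] * num_vertices)
    b.append(k)
    for v in range(num_vertices):
        upper = [0] * num_vertices
        upper[v] = 1
        A.append(upper)
        b.append(1)
        lower = [0] * num_vertices
        lower[v] = -1
        A.append(lower)
        b.append(0)
    return A, b

def has_01_solution(A, b, n):
    # Brute force over {0,1}^n (valid here since the constraints force 0/1 values);
    # this enumeration is exponential in the dimension n.
    return any(all(sum(c * x for c, x in zip(row, point)) <= bound
                   for row, bound in zip(A, b))
               for point in product((0, 1), repeat=n))

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]     # a 4-cycle
A, b = vertex_cover_ilp(edges, 4, k=2)
print(has_01_solution(A, b, 4))              # True: {0, 2} is a cover
A, b = vertex_cover_ilp(edges, 4, k=1)
print(has_01_solution(A, b, 4))              # False: one vertex cannot cover all edges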
Finally, in 2023, we almost have an answer in the affirmative! A recent paper by Reis and Rothvoss (almost) settles the so-called subspace flatness conjecture due to Kannan and Lovász from 1988. When this result is plugged into an integer programming algorithm due to Dadush from 2012, one achieves a runtime dependence of (log n)^{O(n)} on the dimension n, a vast improvement over the classic results!
The original work of Lenstra is based on a nontrivial dichotomy exhibited by any convex body K in n dimensions: Either K has some structure (namely a small covering radius) that implies an easy-to-find lattice point (one with integer coordinates), or it is flat and squeezed into a small width w in some direction y that can further be efficiently found. In the latter case, one can ascertain whether K has a lattice point by recursing on the intersection of K with a small number (about w) of hyperplanes perpendicular to y. The width bound w established is poly(n), so unwinding the recursion to n levels leads to an n^{O(n)} runtime.
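To spell out the recursion count (in rough notation of my own, not the paper's): if T(n) denotes the time to decide whether an n-dimensional body contains a lattice point, the flat case gives T(n) ≤ w · T(n−1) + poly(S) with w = poly(n), which unwinds over n levels of recursion to T(n) ≤ poly(n)^n · poly(S) = n^{O(n)} poly(S).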
The recursion onto one lower dimension works nicely as it is straightforward to enumerate all lattice points in a one-dimensional body (this is just listing integers in a small interval). However, it leads to a recursion depth of n. Dadush’s algorithm works by allowing the elimination of multiple dimensions in a single recursive step. The simple enumeration of integers in an interval is then replaced by the more complicated task of enumerating lattice points in a d-dimensional integral projection PK of K (integral projection means P is a d × n matrix with integer entries). If every translate of PK has at most M^d lattice points — a condition that is implied by the volume of PK being at most O(M)^d when the covering radius of K is not too small — an algorithm by Dadush, Peikert, and Vempala shows the enumeration of lattice points within PK can be done in poly(M^d) time. Plugging this into the recursive approach to find a lattice point in the original n-dimensional body K, one would spend O(M) amortized time per dimension (assuming that the projection P can also be efficiently found), leading to an M^{O(n)} algorithm for ILP.
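Roughly, the accounting goes as follows (again in my own notation): a recursive step that eliminates d dimensions branches into at most poly(M^d) = M^{O(d)} subproblems, so along any root-to-leaf path of the recursion the branching factors multiply out to M^{O(d_1)} · M^{O(d_2)} · ... = M^{O(d_1 + d_2 + ...)} = M^{O(n)}, i.e., an amortized factor of poly(M) per eliminated dimension.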
The key then is to be able to find a projection with low volume. This is what the Kannan-Lovász subspace flatness conjecture asserts: if a convex body K avoids all lattice points, then for some d, there is a d-dimensional integral projection PK whose volume is at most O(log n)^d. (The original Lenstra approach considered only d = 1, and could only demonstrate poly(n) width along a single direction.) Reis and Rothvoss establish the conjecture with a bound of (log n)^3 instead of O(log n), along with an algorithm to find the claimed projection in exp(O(n)) time. Their proof is based on the beautiful reverse Minkowski theorem of Regev and Stephens-Davidowitz, which already implied subspace flatness with a factor of (log n)^{1.5} for the special case of ellipsoids. The poly(log n) bound on flatness works just as well for the above-discussed recursive ILP algorithm in order to achieve a (log n)^{O(n)} runtime dependence on the dimension n.
It is known that a width of log n is necessary for the subspace flatness conjecture, so unfortunately there does not seem to be an avenue to push this direction to the still-elusive 2^{O(n)} bound for ILP. Who knows — perhaps there is some fine-grained complexity lower bound precluding such a runtime!
An old code comes good for challenges old and new
The Reed-Muller codes are one of the oldest code constructions, with the binary version dating back to 1954. A Reed-Muller code treats the message as the coefficients of a low-degree multivariate polynomial, and encodes it via the evaluations of the polynomial at all tuples of field elements. In particular, binary Reed-Muller codewords are just truth tables of low-degree Boolean functions. The Reed-Muller code has a rich structure that makes it very attractive in coding theory, as well as in applications in computational complexity and cryptography. Let me briefly mention two striking recent works concerning Reed-Muller codes, one on the classic problem of achieving Shannon capacity, and the other a beautiful application in cryptography (to private information retrieval).
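For concreteness, here is a small sketch (my own, not taken from the papers discussed below) of encoding in the binary Reed-Muller code RM(r, m): the message assigns a GF(2) coefficient to each monomial of degree at most r in m Boolean variables, and the codeword is the truth table of the resulting polynomial, of length 2^m.

from itertools import combinations, product

def rm_monomials(m, r):
    # All monomials of degree <= r in variables x_0, ..., x_{m-1},
    # each represented as a tuple of variable indices.
    return [mono for d in range(r + 1) for mono in combinations(range(m), d)]

def rm_encode(message_bits, m, r):
    # Binary Reed-Muller encoding: the message bits are the GF(2) coefficients
    # of a degree-<= r polynomial in m Boolean variables; the codeword is its
    # truth table, i.e., its evaluation at all 2^m points of {0,1}^m.
    monos = rm_monomials(m, r)
    assert len(message_bits) == len(monos)
    codeword = []
    for point in product((0, 1), repeat=m):            # all 2^m evaluation points
        value = 0
        for coeff, mono in zip(message_bits, monos):
            if coeff and all(point[i] for i in mono):  # monomial evaluates to 1
                value ^= 1                             # addition over GF(2)
        codeword.append(value)
    return codeword

# Example: RM(1, 3) encodes 4 message bits (coefficients of 1, x_0, x_1, x_2)
# into a codeword of length 8. The message below is the polynomial 1 + x_1 + x_2.
print(rm_encode([1, 0, 1, 1], m=3, r=1))   # [1, 0, 0, 1, 1, 0, 0, 1]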
It has long been believed (reportedly since at least the late 1960s) that this old family of codes gives a solution to the old and defining problem of coding theory — achieving Shannon capacity. Restricting to the binary symmetric channel for simplicity, the conjecture was that in a binary Reed-Muller code of rate R, if one transmits a Reed-Muller codeword c and it is distorted to r by flipping each bit with probability p that satisfies h(p) < 1-R (h(p) being the binary entropy function at p), then c will be the closest codeword to r with high probability. Thus, outputting the nearest codeword (which is the maximum likelihood estimate of the transmitted codeword) will result in successful error correction. A remarkable recent paper by Abbe and Sandon proves this conjecture, thus establishing that Reed-Muller codes under maximum likelihood decoding (MLD) achieve Shannon capacity. It is important to caution that MLD is not an efficient decoding algorithm, so this does not lead to an efficient way to achieve Shannon capacity (in the sense that polar codes do, for example). However, it shows that Reed-Muller codes are an explicit family that achieves capacity in the information-theoretic sense of Shannon’s original work (which relied on MLD for random codes).
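To unpack the condition h(p) < 1 - R numerically, here is a tiny sketch (mine, not from the paper): the binary entropy function and a bisection for the largest crossover probability p that a code of rate R can tolerate on the binary symmetric channel.

from math import log2

def binary_entropy(p):
    # h(p) = -p log2(p) - (1-p) log2(1-p), with h(0) = h(1) = 0.
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def noise_threshold(rate, tol=1e-12):
    # Largest p in [0, 1/2] with h(p) < 1 - rate, found by bisection
    # (h is increasing on [0, 1/2]).
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if binary_entropy(mid) < 1 - rate:
            lo = mid
        else:
            hi = mid
    return lo

# A rate-1/2 code can in principle tolerate a crossover probability of about 0.11;
# the Abbe-Sandon result says Reed-Muller codes reach this threshold under
# (inefficient) maximum likelihood decoding.
print(noise_threshold(0.5))   # ~0.110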
The Abbe-Sandon paper gives a nice account of the rich history of the problem. Major progress was achieved in prior work of Reeves and Pfister — also blogged about in an earlier issue of this column — showing that Reed-Muller codes achieve vanishing bit-error probability at rates below capacity. That is, any specific bit of the codeword can be recovered correctly, except with probability ε = o(1) (by symmetry of the code, this probability is the same for all bits). However, ε was not small enough to convert this into an algorithm that recovers the full codeword correctly with small block-error probability. (Reed-Muller codes have poor minimum distance, so they can only correct a very small fraction of worst-case errors, and ε was much bigger than that.)
The Abbe-Sandon approach starts off by noting an information-theoretic fact that holds in any code where all positions are symmetric: maximum likelihood decoding of any codeword bit based on the remaining bits gives a “coarse guess” that offers a small advantage over random guessing of that bit. The next steps, which form the crux of their approach, amplify the success probability of guessing a bit to 1-o(1) by using a “sunflower” of subspaces. This is achieved by combining several coarse guesses for a bit that are obtained courtesy of the Reed-Muller subcodes restricted to those subspaces. (Reed-Muller codes have a nice algebraic structure wherein, restricted to any subspace, they remain a Reed-Muller code of smaller dimension. This feature is at the heart of most of their powerful and widely useful properties, including local decodability and testability.) Given the ongoing Analysis semester at the Institute, it is worth highlighting that Fourier analysis plays a crucial role in this step, and indeed is the core technical ingredient in the overall approach. The o(1) bit-error probability is still not small enough for a simple union bound over all bits, but it is strong enough to be combined with known list-decoding estimates to achieve small block-error probability as well.
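The real amplification works with correlated guesses coming from subcodes on a sunflower of subspaces, but the flavor of the boosting step can be seen in the following toy simulation (my own, and much cruder than the actual argument, since it pretends the coarse guesses are independent): majority-voting k guesses that are each correct with probability 1/2 + δ drives the error down to roughly exp(-2δ²k) by a Chernoff bound.

import random

def majority_of_weak_guesses(true_bit, k, delta):
    # Toy model: k independent guesses, each agreeing with true_bit
    # with probability 1/2 + delta; output the majority vote.
    correct_votes = sum(1 for _ in range(k) if random.random() < 0.5 + delta)
    return true_bit if correct_votes > k / 2 else 1 - true_bit

def empirical_error(k, delta, trials=20000):
    # Fraction of trials in which the majority vote gets the bit wrong;
    # a Chernoff bound caps this at exp(-2 * delta**2 * k).
    wrong = sum(majority_of_weak_guesses(0, k, delta) != 0 for _ in range(trials))
    return wrong / trials

for k in (10, 100, 1000):
    print(k, empirical_error(k, delta=0.1))   # error drops rapidly as k grows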
As mentioned above, this does not give an efficient algorithm to achieve capacity, which remains open, and would be quite remarkable if possible. The base information-theoretic step (to get a coarse guess on each bit) and the final list-decoding argument are inefficient. Interestingly, the middle step of boosting the success probability using “subspace sunflowers” is efficient, given the coarse guesses from the base step.
Reed-Muller codes recently found a spectacular and surprising use in a more “modern” application, namely private information retrieval (PIR), and, building upon it, fully homomorphic RAM computation. PIR is an intriguing notion where a client is able to retrieve any desired piece of information from a server’s database privately, without the server learning anything about which piece the client was interested in. A trivial, highly inefficient way to achieve this would be to download the full database. Under standard cryptographic hardness assumptions, PIR schemes where the client’s communication and computation are polylogarithmic in the database size have long been known. However, these schemes require the server to access the full database and run some computation on its entire contents to answer each query, which is clearly impractical. (In fact, if the server did not access some location, it would know that the client wasn’t interested in that particular location, violating the strong privacy requirement on the client side.) Doubly efficient PIR (DEPIR) aims to get around this limitation and achieve low communication and low server computation simultaneously, by allowing the database DB to be preprocessed into some static data structure DB’ that is stored on the server. The hope would be that the server would only need to access a sublinear, ideally polylogarithmic, number of locations in the data structure DB’ during each PIR execution. It was, however, far from clear whether such a scheme could be realized.
One of the papers that won a best paper award at the just-concluded STOC gave a really pleasing solution to this challenge. The authors, Lin, Mook, and Wichs, orchestrate a beautiful marriage of algebraically nice homomorphic encryption schemes with a preprocessing scheme, due to Kedlaya and Umans, that enables fast multivariate polynomial evaluation.
Here is a very high-level description of the paper’s approach. The indices into the database are interpreted as tuples in a field, and the database entries are viewed as evaluations of a multivariate polynomial P at those indices. The client sends the encryption Enc(t) of the tuple t corresponding to the index of their interest. The server would then use the homomorphic encryption scheme to compute the encryption of the evaluation P(t) (which is the database entry the client is interested in). The algebraic niceness of the homomorphic scheme now kicks in to ensure that Enc(P(t)) = P(Enc(t)). Thus, all that the server needs to do is to evaluate the polynomial P at Enc(t). Or in other words, compute a specific position in the Reed-Muller encoding of P! Naively, this would require accessing the full polynomial P, but the preprocessing scheme allows one to do this much more efficiently. The local decodability of Reed-Muller codes has found many uses in complexity theory such as PCP constructions, worst-case to average-case reductions, and constructions of extractors and pseudorandom generators (including the Shaltiel-Umans generator mentioned in the pseudodeterministic primality section). But here, it is a form of local encodability of Reed-Muller codes, given suitable preprocessing of the message polynomial, that drives this stunning application. The Reed-Muller code is the gift that keeps on giving.
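Here is a toy illustration of the “database as polynomial evaluations” view (my own sketch; the actual scheme works with Reed-Muller-style polynomials over larger fields, uses the Kedlaya-Umans preprocessing, and has the server evaluate under homomorphic encryption rather than in the clear). A database with 2^m entries is encoded as its multilinear extension P over a prime field, so that evaluating P at the bit representation of an index t returns DB[t]; the server’s task is then “just” a polynomial evaluation.

PRIME = 2**31 - 1    # a toy prime modulus

def multilinear_eval(db, point):
    # Evaluate the multilinear extension P of db (length 2^m) at `point`,
    # a tuple of m field elements:
    #   P(x) = sum_i db[i] * prod_j (x_j if bit j of i is 1 else 1 - x_j).
    # At 0/1 points, P agrees with the database.
    m = len(point)
    assert len(db) == 2 ** m
    total = 0
    for i, entry in enumerate(db):
        term = entry
        for j, x in enumerate(point):
            bit = (i >> j) & 1
            term = term * (x if bit else 1 - x) % PRIME
        total = (total + term) % PRIME
    return total

db = [7, 1, 4, 9, 2, 8, 5, 3]                       # 2^3 database entries
t = 5
bits_of_t = tuple((t >> j) & 1 for j in range(3))   # index t as a point in {0,1}^3
assert multilinear_eval(db, bits_of_t) == db[t]
# In the DEPIR scheme, the client would instead send an encryption of these
# coordinates, and the server would evaluate the polynomial homomorphically,
# with Kedlaya-Umans-style preprocessing so that it touches only a few locations
# of the preprocessed data structure rather than the entire polynomial.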