Theory at the Institute and Beyond, October 2020

by Prasad Raghavendra (Simons Institute)

The world of cryptography saw a fundamental breakthrough this August, marking the beginning of the end of a very exciting period in the area, one that began with the construction of candidate indistinguishability obfuscation schemes by Garg et al. in 2013.

The goal of program obfuscation is to scramble a computer program so as to hide its implementation details while preserving its functionality. A strong form of obfuscation is “virtual black-box security,” which requires that the scrambled program act as a black box to the attacker; in other words, anything an attacker can glean from the scrambled program, they could also learn from input-output access alone. It has been known since the work of Barak et al. in 2001 that obfuscation under this strong yet very natural notion of security is impossible in general.
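In symbols (a standard formulation, stated here for concreteness rather than quoted from the paper): an obfuscator $\mathcal{O}$ is virtual black-box secure if for every efficient adversary $A$ there is an efficient simulator $S$ with only oracle access to the program $C$ such that, for every $C$,
$$\left| \Pr[A(\mathcal{O}(C)) = 1] - \Pr[S^{C}(1^{|C|}) = 1] \right| \le \mathrm{negl}(|C|).$$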

Indistinguishability obfuscation is a weaker notion: if two programs P1 and P2 compute the same function, then the outputs of the scrambler on the two programs are computationally indistinguishable. In a groundbreaking work, Garg et al. constructed the first candidate indistinguishability obfuscation scheme, building on a construction of a primitive known as multilinear maps.
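More formally (again, a standard formulation included for concreteness): for every pair of equally sized circuits $C_0$ and $C_1$ computing the same function, and every efficient distinguisher $D$,
$$\left| \Pr[D(i\mathcal{O}(C_0)) = 1] - \Pr[D(i\mathcal{O}(C_1)) = 1] \right| \le \mathrm{negl}(\lambda),$$
where $\lambda$ is the security parameter.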

Barak terms obfuscation the cryptographer’s “master tool” in his survey on the topic, since almost any cryptographic primitive one can envisage can be derived from obfuscation. This includes basic notions like public-key encryption, digital signatures, and multiparty computation, as well as fancier ones like homomorphic encryption, witness encryption and functional encryption.

Indistinguishability obfuscation (iO) generated tremendous excitement in cryptography, opening up possibilities for primitives that were seemingly out of reach, and at the same time yielding unified constructions of existing ones. Indistinguishability obfuscation has even been applied toward showing the intractability of PPAD-complete problems. As of this writing, the original candidate construction of iO by Garg et al. has been cited 1,100+ times within the span of a few years.

Unfortunately, all previously known iO constructions required new hardness assumptions that were postulated specifically to prove the security of the proposed iO schemes. Indeed, a line of work broke some of these new hardness assumptions and the corresponding iO schemes. Whether there exist iO schemes whose security rests on widely accepted, standard hardness assumptions was perhaps the most central open problem in cryptography in recent years.

Jain, Lin and Sahai proposed an iO scheme based on well-founded hardness assumptions that have received considerable attention in earlier work in complexity theory, cryptography and number theory. This places iO on a solid footing, resolving affirmatively the question of whether iO is possible, a major advance for cryptography.

Earlier in the year, Brakerski et al. proposed a new (albeit still heuristic) approach to realizing iO starting with a special kind of fully homomorphic encryption scheme. Building on this work (around the same time as the above work of Jain et al.), Gay and Pass, Brakerski et al. and Wee and Wichs showed how to realize provably secure schemes based on a new variant of the popular circular security assumption used for constructing fully homomorphic encryption. One benefit of this approach is that it offers iO with post-quantum security. 

In light of these exciting developments in obfuscation, the Simons Institute will be hosting a workshop on this topic in December.

Learn calculus again?
An eminent theorist once told me how one could be a theory researcher in the late seventies without being familiar with basic probability. As probability theory started making inroads into algorithms and complexity, some shared the sentiment that it was a fad that wouldn’t last. In the words of this senior theorist, it was actually the theoreticians who did not learn probability theory who did not last. Today, it is nearly impossible to be a theory graduate student without being well versed in basic probability.

Every few years a new set of techniques from a different area of mathematics makes its way into TCS. Every time this happens, it catalyzes a mini-revolution in some branch of theory. Finite field algebra and coding theory fueled the revolution that gave us IP=PSPACE and the PCP theorem. Analysis of boolean functions in the work of Håstad paved the way for optimal inapproximability results. The theory of metric embeddings changed the way we approach graph partitioning and nearest-neighbor search. During my graduate school days in the 2000s, it was hard not to pick up some additive combinatorics, ideas that found their way into property testing, coding theory and hardness of approximation. In the last decade, spectral graph theory and continuous optimization techniques such as interior point methods have led to faster algorithms for classic problems like maximum flow and bipartite matching. Mathematical notions of “real-stable polynomials” and “sum-of-squares proofs” are among the recent additions to this growing list.

It is astounding to see how quickly new techniques become common knowledge among theorists, especially among each new generation of incoming graduate students. This characteristic of theoretical computer science is one of the many factors that make it endearing to me.

The latest entrant to the theorist’s toolkit that is getting increasingly hard to ignore is stochastic calculus.  

At a high level, one could say that calculus is an approach to understanding and quantifying change. To understand how a function F changes from point A to point B, one breaks up the path from A to B into infinitesimal segments and quantifies the change along these segments (a.k.a. differentiation), then puts together these infinitesimal changes to obtain the total change (a.k.a. integration). The infinitesimal changes (a.k.a. the derivative) are often simpler than the original function and are easier to reason about. Even from an algorithmic standpoint, choosing an infinitesimal change is arguably much simpler than choosing a global one, and this is why gradient descent over a continuous domain is easily the most widely used and powerful algorithmic primitive.
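In symbols, this classical picture is captured by the fundamental theorem of calculus together with the gradient descent update rule (both standard statements, included here only for concreteness):
$$F(B) - F(A) = \int_{A}^{B} F'(t)\, dt, \qquad x_{t+1} = x_t - \eta \, \nabla F(x_t),$$
where $\eta > 0$ is a small step size playing the role of the infinitesimal increment.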

Stochastic calculus brings these ideas of differentiation and integration into the domain of random processes like random walks in Euclidean space. In the context of random processes, the correct notions of differentiation and integration differ substantially from classical calculus. Intuitively, for a traditional function F, its change from one time to another is the sum of infinitesimal changes $F'\, dt$, where $F'$ is the derivative of the function F and $dt$ is the infinitesimal time increment. For a random process F, its change involves not only a “drift” term of order $dt$, but also a random infinitesimal change of magnitude $\sqrt{dt}$. While higher powers $(dt)^2, (dt)^3, \ldots$ of the infinitesimal $dt$ are negligible, $\sqrt{dt}$ and $(\sqrt{dt})^2$ are both non-negligible. It is for this reason that the rules of differentiation and integration for random processes are fundamentally different.
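As a concrete illustration (Itô's formula, a standard fact stated here for reference rather than taken from the article): for a standard Brownian motion $B_t$ and a twice-differentiable function $f$, the chain rule picks up an extra second-order term precisely because $(dB_t)^2 = dt$ is not negligible:
$$df(B_t) = f'(B_t)\, dB_t + \tfrac{1}{2} f''(B_t)\, dt.$$
In classical calculus, by contrast, $df(x(t)) = f'(x(t))\, x'(t)\, dt$, and the second-order correction vanishes.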

Continuous random walks arise naturally in the problem of computing the volume of a convex set or sampling a random point from it. Surprisingly, stochastic calculus is useful even in the context of discrete random variables. For example, suppose we would like to generate a single unbiased random bit. One can break this single discrete choice into a continuous random process: start a random walk on the interval [0,1] at ½, and output 0 or 1 depending on which endpoint the walk reaches first. Thus, the choice of a single random bit is broken into infinitely many infinitesimal random choices of the continuous random walk. In principle, given a boolean function F, one can reason about F(x) at a random boolean input x by quantifying the change along each of these infinitesimal random choices.
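Here is a minimal simulation sketch of this bit-generation process, assuming a simple discretization of Brownian motion; the step size and sample count are arbitrary illustrative choices, not taken from the article.

```python
import math
import random

def random_bit_via_walk(dt: float = 1e-3) -> int:
    """Run a discretized Brownian motion on [0, 1] started at 1/2 and
    return 0 or 1 according to which endpoint is hit first."""
    x = 0.5
    while 0.0 < x < 1.0:
        # Each tiny step is a Gaussian increment of magnitude ~ sqrt(dt).
        x += random.gauss(0.0, math.sqrt(dt))
    return 1 if x >= 1.0 else 0

# By symmetry, each endpoint is reached first with probability 1/2,
# so the empirical mean should be close to 0.5.
samples = [random_bit_via_walk() for _ in range(2000)]
print(sum(samples) / len(samples))
```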

There has been a flurry of recent work that relies on continuous random processes and stochastic calculus. Lovett and Meka used Brownian motion to devise an algorithm for discrepancy minimization, yielding a constructive proof of Spencer’s classical result. The paradigm of generating random bits by iteratively adding tiny increments is at the heart of the new approach to constructing pseudorandom generators (PRGs) via polarizing walks. This same paradigm, carried out using stochastic calculus, has been instrumental in settling a long-standing conjecture of Talagrand in the analysis of boolean functions (see here).

Stochastic calculus has been used to significantly simplify the oracle separation between the complexity classes BQP and PH. New rounding schemes for the classical MaxCut semidefinite program have been devised using Brownian motion (see here and here). 

“Stochastic localization” is a random process devised by Eldan that yields a decomposition of a probability measure into a mixture of simpler measures that are localized in the sense of being concentrated on smaller subsets of the original space. The process is a beautiful construction whose analysis relies on stochastic calculus. The technique of stochastic localization has already found numerous applications, most notably with progress on the long-standing Kannan-Lovász-Simonovits conjecture from convex geometry (see Lee-Vempala). As part of the Simons Institute program on Probability, Geometry, and Computation in High Dimensions this semester, a reading group is devoted to stochastic localization and its applications to statistical physics and the combinatorics of random graphs.

Often, I remark to beginning graduate students what a genuine pleasure it is to be at that stage of their careers, for they get to learn all the classics for the very first time. Most results that one learns later in one’s career are derived and less fundamental. To make my point, I would often say that you only get to learn calculus once. Well, here is an opportunity to learn it again.
