New Developments in Indistinguishability Obfuscation (iO)
by Boaz Barak (Harvard University)
A version of this article appeared in the blog Windows on Theory.
For many of the famous open problems in theoretical computer science, most researchers agree on what the answer is, but the challenge is to prove it. Most complexity theorists (with a few notable exceptions) believe that P ≠ NP, but we don’t know how to prove it. Similarly, most people working on matrix multiplication believe that there is an Õ(n²) algorithm for this problem, but the best known exponent is still stuck at 2.3728596. We believed that primality checking has a deterministic polynomial-time algorithm long before it was proved, and we still believe that the same holds for polynomial identity testing.
The story of cryptographic obfuscation is different. In 2001, we (Barak, Goldreich, Impagliazzo, Rudich, Sahai, Vadhan, and Yang) showed that what is arguably the most natural definition of obfuscation is impossible to achieve. That paper explored a number of obfuscation-related questions and in particular left as an open question the existence of so-called indistinguishability obfuscators, or iO. After that, there were arguably more negative than positive results in obfuscation research, until in 2012, by extending some of the ideas behind fully homomorphic encryption, Garg, Gentry, and Halevi gave a heuristic construction of multilinear maps, which one can think of as “Diffie-Hellman on steroids” (or maybe LSD). Then in 2013, Garg, Gentry, Halevi, Raykova, Sahai, and Waters (GGHRSW) built on top of these maps to give a heuristic construction of iO.
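For readers who have not seen it before, here is a rough sketch of what iO asks for; the notation below is mine and is not taken from the papers discussed here. An indistinguishability obfuscator is an efficient algorithm $\mathcal{O}$ that preserves a circuit's functionality while hiding which of two functionally equivalent circuits it was given:

\[
\begin{aligned}
&\text{Functionality:} && \mathcal{O}(C)(x) = C(x) \ \text{ for every circuit } C \text{ and input } x,\\
&\text{Indistinguishability:} && \text{if } |C_1| = |C_2| \text{ and } C_1(x) = C_2(x) \text{ for all } x, \text{ then } \mathcal{O}(C_1) \approx_c \mathcal{O}(C_2),
\end{aligned}
\]

where $\approx_c$ denotes computational indistinguishability by efficient adversaries. Intuitively, the obfuscated circuit should reveal nothing about which implementation was used, only the function it computes (and its size).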
The GGHRSW paper opened the floodgates to a wave of papers using iO to achieve long-standing cryptographic goals, as well as to show that iO provides a unified approach to solving many classic cryptographic problems. The fact that so many goals were achieved through heuristic constructions was not very comforting to cryptographers. Even less comforting was the fact that several cryptographic attacks were discovered on these heuristic constructions. The years that followed saw a sequence of constructions and breaks, giving cryptographers a kind of "emotional whiplash." Everyone agreed that iO would be amazing if it existed, but whether it actually existed depended on whom you asked and what paper in the ePrint Archive they read that morning.
The holy grail in this line of work is to base obfuscation on a standard assumption, ideally Regev’s learning with errors (LWE) assumption. Of course, we don’t know that LWE is true (in particular, LWE implies P ≠ NP); but if it is false, it would bring down so much of the field that cryptographers might as well pack their bags and do machine learning (or try to sabotage progress in quantum computing, since the only other standard assumptions for public-key cryptography are broken by fully scalable quantum computing).
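For concreteness, here is a rough statement of the LWE assumption (a sketch only; the parameters and the noise distribution $\chi$ are elided, and this is not necessarily the precise variant used in the works below): noisy random linear equations in a secret vector $s$ look like uniformly random values.

\[
\big(A,\; A s + e \bmod q\big) \;\approx_c\; \big(A,\; u\big),
\qquad A \leftarrow \mathbb{Z}_q^{m \times n},\quad s \leftarrow \mathbb{Z}_q^{n},\quad e \leftarrow \chi^{m},\quad u \leftarrow \mathbb{Z}_q^{m},
\]

where $\chi$ is a distribution over small ("short") errors and $\approx_c$ again denotes computational indistinguishability.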
We have not yet achieved this holy grail (this is only the fourth season), but as described in this Quanta Magazine article by former Simons Institute Journalist in Residence Erica Klarreich, there has been remarkable progress in the last few months. In particular, Jain, Lin, and Sahai (JLS) (building on a long sequence of works by many people, including Ananth, Matt, Tessaro, and Vaikuntanathan) obtained iO based on LWE and several standard assumptions in cryptography. This work, which was partially conducted at the Simons Institute during the Spring 2020 program on Lattices: Algorithms, Complexity, and Cryptography, is arguably the first “heuristic-free” construction and is a fantastic breakthrough. However, there is still work to do: the JLS construction uses not just LWE but also a variant of it that is not as well studied, and it relies on pairing-based cryptography, an area with thousands of papers but whose known instantiations can be broken by quantum computers. There is yet more hope, though. In another sequence of works, by Agrawal; Brakerski, Döttling, Garg, and Malavolta; Wee and Wichs; and Gay and Pass, a construction of iO was achieved that is “almost” heuristic free: it still uses one heuristic assumption (circular security), but apart from this assumption it relies only on LWE.
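The circular-security assumption mentioned above can also be stated informally (again as a sketch, not the exact variant these papers need): an encryption scheme should remain secure even when it is used to encrypt its own secret key.

\[
\big(\mathrm{pk},\ \mathrm{Enc}_{\mathrm{pk}}(\mathrm{sk})\big) \;\approx_c\; \big(\mathrm{pk},\ \mathrm{Enc}_{\mathrm{pk}}(0^{|\mathrm{sk}|})\big),
\]

that is, a ciphertext encrypting the secret key should look no different from an encryption of an all-zero string of the same length. This property is not implied by standard security definitions, which is why it currently counts as a heuristic assumption.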
One can hope that in the next season, these two lines of work will converge to give a construction of iO based on LWE, yielding a “meta-theorem” that derives a huge array of cryptographic primitives from LWE.
Want to learn more about these amazing advances? Want to know what's next in store for iO?
Fortunately, there was a highly successful Simons Institute workshop on the topic this month. Authors of all the papers mentioned in this article joined together in coordinated presentations to give a unified view of the field and the challenges ahead. The workshop also featured a historical overview by Yael Kalai, as well as a talk by Benny Applebaum on the computational assumptions used, followed by a panel discussion with Yael, Benny, and Chris Peikert. Finally, as with every proper crypto event, there was a rump session — though given the remote format, participants had to bring their own beer.
You can watch the workshop talks here.