
### Monday, June 14th, 2021

In this talk, we will describe different variants of the NTRU problem and study how they compare to each other (and to other, more classical lattice problems) in terms of reductions. More precisely, we will show that one of the search variants of the NTRU problem is at least as hard as the shortest vector problem (SVP) in ideal lattices, and that the decisional variant of NTRU is at least as hard as another search variant of NTRU. Unfortunately, the two search variants of NTRU considered in these reductions do not match, meaning that we cannot combine the reductions to obtain a reduction from the ideal shortest vector problem to the decisional NTRU problem.

This is joint work with Damien Stehlé.

Lattice-based algorithms in cryptanalysis often search for a target vector satisfying integer linear constraints as a shortest or closest vector in some lattice. In this work, we observe that these formulations may discard non-linear information from the underlying application that can be used to distinguish the target vector even when it is far from being uniquely close or short.

We formalize lattice problems augmented with a predicate distinguishing a target vector and give algorithms for solving instances of these problems. We apply our techniques to lattice-based approaches for solving the Hidden Number Problem, a popular technique for recovering secret DSA or ECDSA keys in side-channel attacks, and demonstrate that our algorithms succeed in recovering the signing key for instances that were previously believed to be unsolvable using lattice approaches. We carried out extensive experiments using our estimation and solving framework. This is joint work with Martin Albrecht, launched thanks to the lattice semester.
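To make the lattice formulation of the Hidden Number Problem concrete, here is a minimal sketch of the classical basis construction, with purely hypothetical toy parameters (this is the textbook embedding, not the predicate-augmented framework of the talk). Each sample gives a relation (t*x + u) mod n = k with k small; the lattice is built so that a vector exposing the small k_i (and the secret x in its last coordinate) is an integer combination of the rows:

```python
import random

n = 2**31 - 1          # group order (toy size; a real attack targets a ~256-bit order)
B = 2**16              # bound on the leaked "small" values k_i
m = 8                  # number of samples
x = random.randrange(1, n)   # hidden number, e.g. a signing key

samples = []
for _ in range(m):
    t = random.randrange(1, n)
    k = random.randrange(1, B)     # small value exposed through a side channel
    u = (k - t * x) % n            # chosen so that (t*x + u) mod n = k < B
    samples.append((t, u, k))

# HNP lattice basis: m scaled unit vectors plus one row carrying the t_i.
basis = [[n if j == i else 0 for j in range(m + 1)] for i in range(m)]
basis.append([t for t, _, _ in samples] + [1])

# The integer combination (c_1, ..., c_m, x) of the rows yields a lattice
# point whose offset from the target (-u_1, ..., -u_m, 0) consists of the
# small entries k_i -- this is the vector a BDD/CVP solver would recover.
coeffs = [(k - u - t * x) // n for t, u, k in samples] + [x]
point = [sum(c * basis[r][j] for r, c in enumerate(coeffs)) for j in range(m + 1)]
offsets = [point[j] + samples[j][1] for j in range(m)]
```

In an actual attack one would run LLL/BKZ or a BDD solver on this basis with target (-u_1, ..., -u_m, 0); the recovered point reveals x in its last coordinate.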

We construct the first decentralized multi-authority attribute-based encryption (MA-ABE) scheme for a non-trivial class of access policies whose security is based (in the random oracle model) solely on the Learning With Errors (LWE) assumption. The supported access policies are ones described by DNF formulas. All previous constructions of MA-ABE schemes supporting any non-trivial class of access policies were proven secure (in the random oracle model) assuming various assumptions on bilinear maps.

In our system, any party can become an authority and there is no requirement for any global coordination other than the creation of an initial set of common reference parameters. A party can simply act as a standard ABE authority by creating a public key and issuing private keys to different users that reflect their attributes. A user can encrypt data in terms of any DNF formula over attributes issued from any chosen set of authorities. Finally, our system does not require any central authority. In terms of efficiency, when instantiating the scheme with a global bound s on the size of access policies, the sizes of public keys, secret keys, and ciphertexts all grow with s.

Technically, we develop new tools for building ciphertext-policy ABE (CP-ABE) schemes using LWE. Along the way, we construct the first provably secure CP-ABE scheme supporting access policies in NC1 that avoids the generic universal-circuit-based key-policy to ciphertext-policy transformation. In particular, our construction relies on linear secret sharing schemes with new properties and in some sense is more similar to CP-ABE schemes that rely on bilinear maps. While our CP-ABE construction is not more efficient than existing ones, it is conceptually intriguing and further we show how to extend it to get the MA-ABE scheme described above.

Authors: Fabrice Benhamouda and Aayush Jain and Ilan Komargodski and Huijia Lin

Abstract: Motivated by the goal of designing versatile and flexible secure computation protocols that at the same time require as little interaction as possible, we present new multiparty reusable Non-Interactive Secure Computation (mrNISC) protocols. This notion, recently introduced by Benhamouda and Lin (TCC 2020), is essentially two-round Multi-Party Computation (MPC) protocols where the first round of messages serves as a reusable commitment to the private inputs of participating parties. Using these commitments, any subset of parties can later compute any function of their choice on their respective inputs by just sending a single message to a stateless evaluator, conveying the result of the computation but nothing else. Importantly, the input commitments can be computed without knowing anything about other participating parties (neither their identities nor their number) and they are reusable across any number of desired computations.

We give a construction of mrNISC that achieves standard simulation security, as classical multi-round MPC protocols achieve. Our construction relies on the Learning With Errors (LWE) assumption with polynomial modulus, and on the existence of a pseudorandom function (PRF) in NC1. We achieve semi-malicious security in the plain model and malicious security by further relying on trusted setup (which is unavoidable for mrNISC). In comparison, the only previously known constructions of mrNISC were either using bilinear maps or using strong primitives such as program obfuscation.

We use our mrNISC to obtain new Multi-Key FHE (MKFHE) schemes with threshold decryption:

- In the CRS model, we obtain threshold MKFHE for NC1 based on LWE with only polynomial modulus and PRFs in NC1, whereas all previous constructions rely on LWE with super-polynomial modulus-to-noise ratio.

- In the plain model, we obtain threshold levelled MKFHE for P based on LWE with polynomial modulus, PRFs in NC1, and NTRU, and another scheme for a constant number of parties from LWE with sub-exponential modulus-to-noise ratio. The only known prior construction of threshold MKFHE (Ananth et al., TCC 2020) in the plain model restricts the set of parties who can compute together at the onset.

We analyze a new trade-off between the running time and output quality of slide reduction. We show that exploiting this trade-off can be favorable in practice and yields variants that are competitive with state-of-the-art reduction algorithms. Since slide reduction offers some straightforward parallelization, this could allow larger-scale lattice reduction attacks in practice. As a side result, we obtain sharper bounds on the polynomial running time (in the query model) of slide reduction and its generalization, block-Rankin reduction.

Eisenbrand and Venzin recently showed that the fastest algorithm for (large) constant-factor-approximate SVP and CVP in the L_2 norm can be made to work for all L_p norms. Their techniques are simple and elegant---the kind of proof that's surprising at first and then obvious in hindsight.

In follow-up work with Aggarwal, Chen, Kumar, and Li, we use their techniques to produce generic reductions, showing that any algorithm for SVP or CVP in one L_p norm yields algorithms in different L_p norms with only a constant factor loss in the approximation factor and a 2^{eps n} factor loss in the running time. (The reduction moves from larger p to smaller p for SVP and in the opposite direction for CVP.)

In my eyes (and perhaps only my eyes) all of this yields two surprising conclusions:

1) Lattice problems in different L_p norms are far more closely related than I originally thought.

2) If we accept the complexity-theoretic assumptions behind the recent fine-grained lower bounds for CVP in L_p norms (with Aggarwal, Bennett, and Golovnev), then the time-approximation factor tradeoff for CVP in L_p norms must behave... strangely.

### Tuesday, June 15th, 2021

The talk will present a recent result on new lower bounds on the cost of solving the nearest neighbor search problems appearing in cryptanalytic settings. For the Euclidean metric, we show that for random data sets on the sphere, the locality-sensitive filtering approach of [Becker--Ducas--Gama--Laarhoven, SODA 2016] using spherical caps is optimal, and hence, within a broad class of lattice sieving algorithms covering almost all approaches to date, their asymptotic time complexity of $2^{0.292d + o(d)}$ is optimal. For the Hamming metric, we derive new lower bounds for nearest neighbor searching which almost match the best upper bounds from the literature [May--Ozerov, Eurocrypt 2015]. As a consequence, we derive conditional lower bounds on decoding attacks, showing that here too one should search for improvements elsewhere to significantly undermine security estimates from the literature.

This is joint work with T. Laarhoven that was initiated at the Spring 2020 program "Lattices: Algorithms, Complexity, and Cryptography."
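The spherical-cap filtering idea at the heart of the lower bound is simple to illustrate: a point survives a filter iff its inner product with the filter's center exceeds a threshold, and nearby points on the sphere tend to survive the same filters. A minimal sketch with illustrative parameters (not the paper's analysis):

```python
import math
import random

def unit(d, rng):
    """A uniformly random point on the unit sphere in R^d."""
    v = [rng.gauss(0.0, 1.0) for _ in range(d)]
    s = math.sqrt(sum(x * x for x in v))
    return [x / s for x in v]

def in_cap(v, center, alpha):
    # v survives the filter iff it lies in the spherical cap
    # {x : <x, center> >= alpha}.
    return sum(a * b for a, b in zip(v, center)) >= alpha

rng = random.Random(1)
d, n_filters, alpha = 32, 500, 0.3
centers = [unit(d, rng) for _ in range(n_filters)]

v = unit(d, rng)
w = [a + 0.1 * b for a, b in zip(v, unit(d, rng))]    # point near v
s = math.sqrt(sum(x * x for x in w))
w = [x / s for x in w]
u = unit(d, rng)                                      # independent point

shared_near = sum(in_cap(v, c, alpha) and in_cap(w, c, alpha) for c in centers)
shared_far = sum(in_cap(v, c, alpha) and in_cap(u, c, alpha) for c in centers)
# Nearby points share many filters; independent points share almost none.
# This is what makes cap-based bucketing effective inside a sieve, and it
# is exactly this family of filters that the lower bound shows optimal.
```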

The dual attack has long been considered a relevant attack on lattice-based cryptographic schemes relying on the hardness of learning with errors (LWE) and its structured variants. As solving LWE corresponds to finding a nearest point on a lattice, one may naturally wonder how efficient this dual approach is for solving more general closest vector problems, such as the classical closest vector problem (CVP), the variants bounded distance decoding (BDD) and approximate CVP, and preprocessing versions of these problems. While primal, sieving-based solutions to these problems (with preprocessing) were recently studied in a series of works on approximate Voronoi cells, for the dual attack no such overview exists, especially for problems with preprocessing. With one of the take-away messages of the approximate Voronoi cell line of work being that primal attacks work well for approximate CVP(P) but scale poorly for BDD(P), one may wonder if the dual attack suffers the same drawbacks, or if it is a better method for solving BDD(P).

In this work we provide an overview of cost estimates for dual algorithms for solving these "classical" closest lattice vector problems. Heuristically, we expect to solve the search version of average-case CVPP in time and space 2^{0.293d+o(d)}. For the distinguishing version of average-case CVPP, where we wish to distinguish between random targets and targets planted at distance approximately the Gaussian heuristic from the lattice, we obtain the same complexity in the single-target model, and we obtain query time and space complexities of 2^{0.195d+o(d)} in the multi-target setting, where we are given a large number of targets from either target distribution. This suggests an inequivalence between distinguishing and searching, as we do not expect a similar improvement in the multi-target setting to hold for search-CVPP. We analyze three slightly different decoders, for both distinguishing and searching, and experimentally obtain concrete cost estimates for the dual attack in dimensions 50 to 80, which confirm our heuristic assumptions and show that the hidden order terms in the asymptotic estimates are quite small.
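The distinguisher underlying the dual attack can be seen in a deliberately degenerate toy case: score a target t by f(t) = Σ_w cos(2π⟨w, t⟩) over a set of short dual vectors w. For the integer lattice Z^n (whose dual is Z^n itself, so the standard basis vectors serve as the short dual vectors) this reduces to a few lines; parameters are purely illustrative, not the paper's estimates:

```python
import math
import random

rng = random.Random(0)
n = 64
sigma = 0.05       # BDD-like noise width for the planted target

def score(t):
    # Short dual vectors w = e_1, ..., e_n, so <w, t> = t_i.
    return sum(math.cos(2 * math.pi * ti) for ti in t)

# A target planted near a lattice point versus a uniformly random target.
planted = [rng.randrange(100) + rng.gauss(0.0, sigma) for _ in range(n)]
uniform = [100 * rng.random() for _ in range(n)]

# Planted targets score near n (every cosine is close to 1), while
# uniform targets score near 0; thresholding f at n/2 distinguishes
# the two distributions.
```

For a real lattice one would use the shortest vectors of the dual lattice (found by sieving) instead of the standard basis, but the score function and the thresholding step are the same.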

I plan to cover recent works on improving lattice point enumeration as a subroutine of blockwise lattice reduction.

- Lattice Reduction with Approximate Enumeration Oracles: Practical Algorithms and Concrete Performance

Martin R. Albrecht, Shi Bai, Jianwei Li and Joe Rowell.

CRYPTO 2021 https://eprint.iacr.org/2020/1260

- Faster Enumeration-based Lattice Reduction: Root Hermite Factor k^(1/(2k)) in Time k^(k/8 + o(k))

Martin R. Albrecht, Shi Bai, Pierre-Alain Fouque, Paul Kirchner, Damien Stehlé and Weiqiang Wen

CRYPTO 2020 https://eprint.iacr.org/2020/707

Together, these works reduce the cost of reaching root Hermite factor ~ GH(400)^(1/400) from ~ 2^254 enumeration nodes when solving an SVP instance to ~ 2^196 nodes inside BKZ (per block).
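For context, the quality target GH(400)^(1/400) quoted above is a concrete number: the Gaussian heuristic GH(d) = vol(B_d)^(-1/d) = Γ(d/2 + 1)^(1/d)/√π predicts the shortest-vector length of a random unit-volume lattice of dimension d, and a few lines suffice to evaluate it (illustrative sketch):

```python
import math

def gaussian_heuristic(d):
    # GH(d) = Gamma(d/2 + 1)^(1/d) / sqrt(pi); working with lgamma
    # keeps the computation stable for large d.
    return math.exp(math.lgamma(d / 2 + 1) / d) / math.sqrt(math.pi)

d = 400
delta = gaussian_heuristic(d) ** (1 / d)   # root Hermite factor GH(400)^(1/400)
print(f"GH({d}) ~ {gaussian_heuristic(d):.3f}, root Hermite factor ~ {delta:.5f}")
```

The resulting root Hermite factor is roughly 1.004, i.e. in the range relevant for cryptanalytic BKZ parameters.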

Lattice problems and techniques are quintessential in realizing advanced cryptographic primitives that involve complex computations over secret values. In this work, we depart from the lattice paradigm, and show that indistinguishability obfuscation can be constructed without lattices! We prove:

Theorem: Assume sub-exponential security of the following assumptions:

- the Learning Parity with Noise ($\mathsf{LPN}$) assumption over general prime fields $\mathbb{Z}_p$ with polynomially many $\mathsf{LPN}$ samples and error rate $1/k^\delta$, where $k$ is the dimension of the $\mathsf{LPN}$ secret, and $\delta>0$ is any constant;

- the existence of a Boolean Pseudo-Random Generator ($\mathsf{PRG}$) in $\mathsf{NC}^0$ with stretch $n^{1+\tau}$, where $n$ is the length of the $\mathsf{PRG}$ seed, and $\tau>0$ is any constant;

- the Decision Linear ($\mathsf{DLIN}$) assumption on symmetric bilinear groups of prime order.

Then, (subexponentially secure) indistinguishability obfuscation for all polynomial-size circuits exists. Further, assuming only polynomial security of the aforementioned assumptions, there exists collusion-resistant public-key functional encryption for all polynomial-size circuits.
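To make the first assumption concrete, here is a sketch of the LPN sample distribution it refers to, with toy parameters chosen purely for illustration: samples (a, ⟨a, s⟩ + e mod p) where the error e is nonzero only with probability 1/k^δ.

```python
import random

rng = random.Random(7)
p = 2**13 - 1        # a prime modulus (toy size)
k = 256              # dimension of the LPN secret
delta = 0.5          # any constant > 0; the error rate is 1/k**delta
rate = k ** (-delta)          # here 1/16

s = [rng.randrange(p) for _ in range(k)]   # the LPN secret

def lpn_sample():
    a = [rng.randrange(p) for _ in range(k)]
    e = rng.randrange(1, p) if rng.random() < rate else 0   # sparse noise
    return a, (sum(ai * si for ai, si in zip(a, s)) + e) % p

samples = [lpn_sample() for _ in range(100)]
noiseless = sum(
    b == sum(ai * si for ai, si in zip(a, s)) % p for a, b in samples
)
# Most samples are exact linear equations in s; only a ~1/k^delta
# fraction carry an error. This sparsity of errors is what makes the
# inverse-polynomial-rate regime special.
```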

This removes the reliance on the Learning With Errors (LWE) assumption from our previous construction [Jain, Lin, Sahai STOC'21]. As a consequence, we obtain the first fully homomorphic encryption scheme that does not rely on any lattice-based hardness assumption. Our techniques feature a new notion of randomized encoding called Preprocessing Randomized Encoding (PRE) that, essentially, can be computed in the exponent of pairing groups. When combined with other new techniques, PRE gives a much more streamlined construction of iO while still maintaining reliance only on well-founded assumptions.

Joint work with Aayush Jain and Huijia Lin.

We study several strengthenings of classical circular security assumptions which were recently introduced in four new constructions of indistinguishability obfuscation from lattice-based primitives: Brakerski-Döttling-Garg-Malavolta (Eurocrypt 2020), Gay-Pass (STOC 2021), Brakerski-Döttling-Garg-Malavolta (Eprint 2020) and Wee-Wichs (Eurocrypt 2021).

We provide explicit counterexamples to the {\em $2$-circular shielded randomness leakage assumption} of Gay-Pass and the {\em homomorphic pseudorandom LWE samples} assumption of Wee-Wichs. Our work separates classical circular security of the kind underlying un-levelled fully-homomorphic encryption and the strengthened versions underlying recent iO constructions.

A proof system for NP satisfies the post-quantum proof of knowledge property if the following holds: for any quantum prover convincing the verifier, there exists an extractor that can extract the witness from the prover with probability negligibly close to the acceptance probability. First studied in the seminal work of Unruh [Eurocrypt'15], this notion has been adopted by follow-up works, most notably in the construction of proof of quantum knowledge protocols for QMA.

The post-quantum proof of knowledge protocol constructed by Unruh suffers from the drawback that the prover's state is destroyed after the extractor extracts the witness. This is especially problematic if this protocol is composed with other protocols. A more desirable feature is that the prover's state remains essentially undisturbed even after extraction.

We show how to achieve post-quantum proof of knowledge from the quantum hardness of learning with errors. Our result satisfies the above desirable property. We also show how to extend our result to the concurrent composition setting.

Joint work with Kai-Min Chung and Rolando L. La Placa.

Abstract: In this talk we present an overview of two recent works where we construct:

- succinct non-interactive arguments (SNARGs) for polynomial-time computation; and

- non-interactive batch arguments (BARGs) for NP.

Our constructions rely on standard hardness assumptions.

This is joint work with Abhishek Jain and Zhengzhong Jin.

We construct public-coin time- and space-efficient zero-knowledge arguments for NP. For every time T and space S non-deterministic RAM computation, the prover runs in time T ⋅ polylog(T) and space S ⋅ polylog(T), and the verifier runs in time n ⋅ polylog(T), where n is the input length. Our protocol relies on hidden order groups, which can be instantiated, assuming a trusted setup, from the hardness of factoring (products of safe primes) or, without a trusted setup, using class groups. The argument system can heuristically be made non-interactive using the Fiat-Shamir transform.

Our proof builds on DARK (Bünz et al., Eurocrypt 2020), a recent succinct and efficiently verifiable polynomial commitment scheme. We show how to implement a variant of DARK in a time- and space-efficient way. Along the way we:

1. Identify a significant gap in the proof of security of DARK.

2. Give a non-trivial modification of the DARK scheme that overcomes the aforementioned gap. The modified version also relies on significantly weaker cryptographic assumptions than those in the original DARK scheme. Our proof utilizes ideas from the theory of integer lattices in a novel way.

3. Generalize Pietrzak's (ITCS 2019) proof of exponentiation (PoE) protocol to work with general groups of unknown order (without relying on any cryptographic assumption).

In proving these results, we develop general-purpose techniques for working with (hidden order) groups, which may be of independent interest.
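Item 3 generalizes Pietrzak's proof of exponentiation (PoE), whose recursive halving structure is compact enough to sketch. The claim is y = x^(2^T) in the group; each round the prover reveals the midpoint of the repeated squaring and the two half-size claims are folded into one. The sketch below uses a toy RSA-style modulus and a plain hash as a Fiat-Shamir stand-in, both purely illustrative (real instantiations use safe-prime products or class groups, and the generalization removes structural assumptions on the group):

```python
import hashlib

N = 1000003 * 1000033    # toy composite modulus; far too small to be secure

def challenge(x, y, mu, T):
    # Fiat-Shamir stand-in for the verifier's random challenge.
    h = hashlib.sha256(f"{x},{y},{mu},{T}".encode()).digest()
    return int.from_bytes(h[:8], "big")

def prove(x, T):
    """Transcript for the claim y = x^(2^T) mod N (T a power of two)."""
    y = pow(x, 2 ** T, N)
    proof = []
    while T > 1:
        T //= 2
        mu = pow(x, 2 ** T, N)       # midpoint of the repeated squaring
        r = challenge(x, y, mu, T)
        proof.append(mu)
        x = (pow(x, r, N) * mu) % N  # fold the two half-size claims:
        y = (pow(mu, r, N) * y) % N  # new claim is y' = x'^(2^(T/2))
    return proof

def verify(x, y, T, proof):
    for mu in proof:
        T //= 2
        r = challenge(x, y, mu, T)
        x = (pow(x, r, N) * mu) % N
        y = (pow(mu, r, N) * y) % N
    return T == 1 and y == pow(x, 2, N)   # base case: a single squaring

x, T = 5, 16
y = pow(x, 2 ** T, N)
proof = prove(x, T)
```

The verifier's work is logarithmic in T, which is what makes PoE useful as a building block inside succinct arguments.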

### Wednesday, June 16th, 2021

We define and construct Deniable Fully Homomorphic Encryption based on the polynomial hardness of the Learning With Errors (LWE) assumption. Deniable FHE enables encrypted data stored in the cloud to be processed securely without decryption while maintaining deniability of the encrypted data; it also enables the prevention of vote-buying in electronic voting schemes where encrypted votes can be tallied without decryption.

Our constructions achieve compactness independently of the level of deniability: both the size of the public key and the size of the ciphertexts are bounded by a fixed polynomial, independent of the faking probability achieved by the scheme. This is in contrast to all previous constructions of deniable encryption schemes (even without requiring homomorphism) based on polynomial hardness assumptions, originating with the seminal work of Canetti, Dwork, Naor and Ostrovsky (CRYPTO 1997), in which the ciphertext size grows with the inverse of the faking probability. Canetti et al. argued that this dependence "seems inherent", but our constructions show that this is not the case. We note that the Sahai-Waters (STOC 2013) construction of deniable encryption from indistinguishability obfuscation achieves compactness and can easily be modified to achieve deniable FHE as well, but it requires multiple, stronger sub-exponential hardness assumptions, which are furthermore not post-quantum secure. In contrast, our constructions rely only on the polynomial hardness of LWE, as currently required for FHE even without deniability.

The running time of our encryption algorithm depends on the inverse of the faking probability, so the scheme falls short of simultaneously achieving compactness, negligible deniability, and polynomial encryption time. Yet we believe that achieving compactness is a fundamental step on the way to achieving all properties simultaneously, as has been the historical journey for other primitives such as functional encryption. Interestingly, we note that our constructions support large message spaces, whereas previous constructions were bit-by-bit, and can be run in an online-offline model of encryption, where the bulk of the computation is independent of the message and may be performed in an offline pre-processing phase. The running time of the online phase is independent of the faking probability, whereas the offline encryption run-time grows with the inverse of the faking probability.

At the heart of our constructions is a new way to use bootstrapping to obliviously generate FHE ciphertexts so that it supports faking under coercion.

Joint work with Shafi Goldwasser and Saleet Mossel.

We construct a succinct non-interactive publicly-verifiable delegation scheme for any logspace uniform circuit under the sub-exponential Learning With Errors (LWE) assumption. For a circuit C : {0, 1}^N -> {0, 1} of size S and depth D, the prover runs in time poly(S), the communication complexity is D * polylog(S), and the verifier runs in time (D + N) * polylog(S). To obtain this result, we introduce a new cryptographic primitive: lossy correlation-intractable hash functions. We use this primitive to soundly instantiate the Fiat-Shamir transform for a large class of interactive proofs, including the interactive sum-check protocol and the GKR protocol, assuming the sub-exponential hardness of LWE. By relying on the result of Choudhuri et al. (STOC 2019), we also establish the sub-exponential average-case hardness of PPAD, assuming the sub-exponential hardness of LWE.

This is based on a joint work with Yael Tauman Kalai, Dakshita Khurana, and Rachel Zhang.
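One of the protocols the abstract instantiates Fiat-Shamir for is the interactive sum-check protocol. A hedged sketch of sum-check made non-interactive, with plain SHA-256 standing in for the verifier's randomness (the paper's actual instantiation uses lossy correlation-intractable hash functions; the hash here only shows the transcript structure):

```python
import hashlib

P = 2**31 - 1   # toy prime field F_P

def mle_eval(table, point):
    """Evaluate the multilinear extension of `table` (values on {0,1}^v)."""
    for r in point:
        half = len(table) // 2
        table = [(table[i] * (1 - r) + table[i + half] * r) % P
                 for i in range(half)]
    return table[0]

def challenge(transcript):
    h = hashlib.sha256(repr(transcript).encode()).digest()
    return int.from_bytes(h[:8], "big") % P

def prove(table):
    """Produce (claimed sum, round polynomials) for the sum over {0,1}^v."""
    claim = sum(table) % P
    transcript, rounds = [claim], []
    for _ in range(len(table).bit_length() - 1):
        half = len(table) // 2
        g0 = sum(table[:half]) % P    # round polynomial g at 0
        g1 = sum(table[half:]) % P    # round polynomial g at 1
        rounds.append((g0, g1))
        transcript.append((g0, g1))
        r = challenge(transcript)     # hash replaces verifier randomness
        table = [(table[i] * (1 - r) + table[i + half] * r) % P
                 for i in range(half)]
    return claim, rounds

def verify(table, claim, rounds):
    transcript, point = [claim], []
    for g0, g1 in rounds:
        if (g0 + g1) % P != claim:    # g(0) + g(1) must equal the claim
            return False
        transcript.append((g0, g1))
        r = challenge(transcript)
        point.append(r)
        claim = (g0 + r * (g1 - g0)) % P   # new claim is g(r)
    return claim == mle_eval(table, point)  # final oracle query to f

table = [3, 1, 4, 1, 5, 9, 2, 6]            # f on {0,1}^3
claim, rounds = prove(table)
```

A sound instantiation of this transform is exactly what requires the correlation-intractability machinery; with an arbitrary hash the argument is only heuristic.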

We construct hash functions that, assuming the hardness of LWE, securely realize the Fiat-Shamir transform (in the standard model) for the following rich classes of protocols:

- The parallel repetition of any ``commit-and-open'' protocol (such as the seminal Goldreich-Micali-Wigderson protocol for 3-coloring), using a natural choice of commitment scheme. Commit-and-open protocols are a ubiquitous paradigm for constructing general purpose public-coin zero knowledge proofs.

- The parallel repetition of any base protocol that (1) satisfies a stronger notion of soundness called round-by-round soundness, and (2) has an efficient procedure, using a suitable trapdoor, for recognizing ``bad verifier randomness'' that would allow the prover to cheat.

This results in non-interactive variants of all such protocols, and also proves that assuming LWE, the original interactive protocols cannot be zero knowledge. This leverages a connection due to Dwork-Naor-Reingold-Stockmeyer (FOCS '99) and resolves long-standing open questions about protocols such as GMW.

Our results are obtained by establishing a new connection between the Fiat-Shamir transform and list-recoverable codes. In contrast to the usual focus in coding theory, we focus on a parameter regime in which the input lists are extremely large, but the rate can be small. We give a (probabilistic) construction based on Parvaresh-Vardy codes (FOCS '05) that suffices for our applications.

This talk is based on joint work with Justin Holmgren and Ron Rothblum.