### Tuesday, August 9th, 2016

We abstract the core of the bitcoin protocol, termed the bitcoin backbone in earlier work, now including the target recalculation mechanism, and we show that it can be instantiated to implement a robust transaction ledger. This is, to our knowledge, the first full analysis of the bitcoin protocol in a dynamic environment where the number of participants is allowed to evolve over time.

Joint work with Juan Garay and Nikos Leonardos.

Nakamoto’s famous blockchain protocol enables achieving consensus in a so-called *permissionless* setting—anyone can join (or leave) the protocol execution, and the protocol instructions do not depend on the identities of the players. His ingenious protocol prevents “sybil attacks” (where an adversary spawns any number of new players) by relying on computational puzzles (a.k.a. “moderately hard functions”) introduced by Dwork and Naor (Crypto’92).
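As a rough illustration (not the analyzed protocol itself), a Dwork–Naor-style computational puzzle can be sketched as a hash preimage search; the function names and the difficulty encoding here are hypothetical.

```python
import hashlib

def solve_puzzle(data: bytes, difficulty_bits: int) -> int:
    """Toy hash puzzle: find a nonce such that SHA-256(data || nonce),
    read as an integer, has `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify_puzzle(data: bytes, nonce: int, difficulty_bits: int) -> bool:
    digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
```

Solving costs about 2^difficulty_bits hash evaluations on average while verification is a single hash; this asymmetry is what makes spawning many puzzle-solving identities expensive.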

Prior works that analyze the blockchain protocol either make the simplifying assumption that network channels are fully synchronous (i.e., messages are instantly delivered without delay) (Garay et al., Eurocrypt '15) or only consider specific attacks (Nakamoto '08; Sompolinsky and Zohar, Financial Crypto '15); additionally, as far as we know, none of them deal with players joining or leaving the protocol.

We prove that the blockchain consensus mechanism satisfies strong forms of consistency and liveness in an asynchronous network with adversarial delays that are a priori bounded, within a formal model allowing for adaptive corruption and spawning of new players, assuming that the computational puzzle is modeled as a random oracle. (We complement this result by showing a simple attack against the blockchain protocol in a fully asynchronous setting, showing that the “puzzle-hardness” needs to be appropriately set as a function of the maximum network delay.)

As an independent contribution, we define an abstract notion of a blockchain protocol and identify appropriate security properties of such protocols; we prove that Nakamoto’s blockchain protocol satisfies them and that these properties are sufficient for typical applications. We finally show how to use our analysis to build *new* blockchain protocols that overcome some of the bottlenecks in Nakamoto’s original protocol.

The analysis of Nakamoto’s blockchain is based on joint work with Lior Seeman and abhi shelat, and new blockchain protocols are based on joint work with Elaine Shi. No prior knowledge of Bitcoin or the blockchain will be assumed.

The distributed systems and cryptography literature has traditionally focused on the "permissioned" model where protocol participants are known a priori. Bitcoin's rapid rise to fame represents an exciting breakthrough: it popularized a "permissionless" model where anyone can join and leave dynamically, and there is no prior knowledge of other participants.

Bitcoin's core protocol is commonly referred to as a "blockchain", which, roughly speaking, realizes a consensus abstraction ensuring consistency and liveness. Today's blockchain protocols, however, suffer from two main drawbacks that have given rise to vigorous debates in the community: 1) existing protocols have terrible performance; 2) existing protocols are not incentive compatible and selfish mining attacks are well-known.

In this talk, we present two recent results that address these pain points: Fruitchain and Hybrid Consensus. Fruitchain is a new, game-theoretically secure blockchain design that incentivizes honest behavior. Hybrid Consensus offers efficiency bootstrapping for permissionless consensus: we show how to leverage a slow blockchain protocol to bootstrap classical Byzantine Fault Tolerance protocols, such that we can achieve consensus in the permissionless setting while attaining the performance of their permissioned counterparts.

Joint work with Rafael Pass.

The Scrypt algorithm has been designed to be a sequential memory-hard function, i.e., the product of time and memory required to evaluate it should be large. Scrypt has found widespread applications in the context of password hashing (in fact, the same design methodology underlies Argon2d, one of the winning designs in the recent password-hashing competition).

However, providing an actual proof that Scrypt is indeed sequential memory-hard has been a challenging open problem. I will review the work initiated during the Simons program on providing provable guarantees for Scrypt.
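For readers who want to experiment, Python's standard library exposes scrypt directly (`hashlib.scrypt`, available when built against OpenSSL 1.1+); the parameter values below are illustrative, not recommendations.

```python
import hashlib

# scrypt's memory cost is roughly 128 * r * n bytes: with n = 2**14 and
# r = 8, an evaluation touches about 16 MiB, which is what the
# "sequential memory-hard" property is meant to enforce.
key = hashlib.scrypt(
    b"correct horse battery staple",  # password
    salt=b"a-unique-per-user-salt",
    n=2**14,   # CPU/memory cost parameter (must be a power of 2)
    r=8,       # block size parameter
    p=1,       # parallelization parameter
    dklen=32,  # derived key length in bytes
)
```

Doubling n doubles both the time and the memory of an honest evaluation; the open question the talk addresses is whether an adversary can provably not trade memory for time more cheaply.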

Joint work with Joel Alwen, Binyi Chen, Chethan Kamath, Vladimir Kolmogorov, Krzysztof Pietrzak.

### Wednesday, August 10th, 2016

No abstract available.

Memory-hard functions (MHFs), first explicitly introduced by Percival, are a promising key-stretching tool for password hashing because the cost of storing/retrieving items from memory is relatively constant across different computer architectures. Thus, in contrast to standard cryptographic hash functions (e.g., SHA256), the cost of computing an MHF cannot be significantly reduced by developing customized hardware (ASICs). More specifically, we want to ensure that any circuit evaluating multiple instances of the MHF has high amortized AT-complexity --- Area × Time / #instances. Data-independent memory-hard functions (iMHFs) are an important variant of MHFs due to their greater resistance to side-channel attacks. An iMHF can be specified by a directed acyclic graph G specifying the data dependencies during computation. Thanks to the recently completed Password Hashing Competition we have many candidate iMHFs, but most of these candidates had not been analyzed until recently.

This talk will summarize recent results demonstrating that a combinatorial property called depth-robustness fully characterizes iMHFs with high amortized-AT complexity. We will also show that Argon2i, the winner of the password hashing competition, is defined using a directed acyclic graph G that is not depth-robust. The resulting attacks are practical for realistic settings of the Argon2i parameters.
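To make the graph-based definition concrete, here is a minimal (hypothetical) iMHF evaluator: each node's label is a hash of the password and its parents' labels, so the memory-access pattern is fixed by the graph G alone, independently of the data.

```python
import hashlib

def imhf_labels(parents, password: bytes) -> bytes:
    """Toy data-independent MHF: node i's label is the hash of the
    password, the index i, and the labels of i's parents.  `parents`
    maps each node to a list of lower-numbered parent nodes; the
    output is the label of the sink (the last node)."""
    labels = []
    for i in range(len(parents)):
        h = hashlib.sha256()
        h.update(password + i.to_bytes(4, "big"))
        for p in parents[i]:
            h.update(labels[p])
        labels.append(h.digest())
    return labels[-1]

# A path graph plus one long-range edge.  Argon2i-style graphs add many
# such edges; whether they form a depth-robust graph is exactly what
# determines the amortized AT-complexity.
parents = {0: [], 1: [0], 2: [1], 3: [2, 0]}
tag = imhf_labels(parents, b"pw")
```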

The talk is based on joint work with Joel Alwen and Krzysztof Pietrzak.

We show how to garble a large database and then a sequence of adaptively and adversarially chosen RAM programs that query and modify the database in arbitrary ways. The security property is that the garbled database and programs reveal only the outputs of the programs when run in sequence on the database. The runtime, space requirements and description size of the garbled programs are proportional only to those of the plaintext programs and the security parameter. The construction is based on indistinguishability obfuscation for circuits and poly-to-one collision-resistant hash functions.

As an immediate application, our scheme is the first to provide a way to outsource large databases to untrusted servers, and later query and update the database over time in a private and verifiable way, with complexity and description size proportional to those of the unprotected queries.

Joint work with Ran Canetti, Yilei Chen and Justin Holmgren.

In this talk I will present recent results on achieving perfect zero knowledge for Interactive Probabilistically-Checkable Proofs and Interactive Oracle Proofs.

An Oblivious RAM (ORAM), introduced by Goldreich and Ostrovsky (JACM 1996), is a (probabilistic) RAM that hides its access pattern, i.e., for every input the observed sequence of accessed locations is similarly distributed. Great progress has been made in recent years in minimizing the overhead of ORAM constructions, with the goal of obtaining the smallest overhead possible.

We revisit the lower bound, due to Goldreich and Ostrovsky, on the overhead required to obliviously simulate programs. While the lower bound is fairly general, and even covers the offline case, where the simulator is given all reads and writes ahead of time, it does assume that the simulator behaves in a "balls and bins" fashion. That is, the simulator must act by shuffling data items around, and is not allowed to use sophisticated encodings of the data.

We prove that for the *offline* case, showing a lower bound without the above restriction is related to the size of circuits for sorting. Our proof is constructive, and uses a bit-slicing approach which manipulates the bit representations of data in the simulation. This implies that without obtaining as-yet-unknown superlinear lower bounds on the size of such circuits, we cannot hope to get lower bounds on offline (unrestricted) ORAMs.
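For intuition, the simplest "balls and bins" simulator is a linear scan that touches every cell on every access: it is trivially oblivious but pays O(N) overhead, and the lower-bound question is how far below that any simulator can go. A toy sketch:

```python
class LinearScanORAM:
    """Trivially oblivious RAM: every logical access reads and rewrites
    all N physical cells, so the observed access pattern
    (0, 1, ..., N-1) is identical for every input -- at O(N) overhead
    per access.  (Real ORAM constructions aim for polylogarithmic
    overhead.)"""
    def __init__(self, n: int):
        self.cells = [0] * n

    def access(self, op: str, index: int, value=None):
        out = None
        for i in range(len(self.cells)):       # always scan everything
            if i == index:
                out = self.cells[i]
                if op == "write":
                    self.cells[i] = value
            else:
                self.cells[i] = self.cells[i]  # dummy rewrite
        return out

ram = LinearScanORAM(8)
ram.access("write", 3, 42)
```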

Joint work with Elette Boyle

We present simple, practical, and powerful new techniques for garbled circuits. These techniques result in significant concrete and asymptotic improvements over the state of the art, for several natural kinds of computations. For arithmetic circuits over the integers, our construction results in garbled circuits with free addition, weighted threshold gates with cost independent of fan-in, and exponentiation by a fixed exponent with cost independent of the exponent. For boolean circuits, our construction gives an exponential improvement over the state of the art for threshold gates (including AND/OR gates) of high fan-in. Our construction can be efficiently instantiated with practical symmetric-key primitives (e.g., AES), and is proven secure under similar assumptions to that of the Free-XOR garbling scheme (Kolesnikov & Schneider, ICALP 2008).
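As background for readers new to garbling, here is a stripped-down Yao-style garbled AND gate, with a hash standing in for the symmetric encryption. It omits point-and-permute, so the row index leaks the plaintext inputs; real schemes (including the one in this talk) avoid that.

```python
import hashlib, os

def H(ka: bytes, kb: bytes, row: int) -> bytes:
    """Hash-based "encryption" keyed by the two input-wire labels."""
    return hashlib.sha256(ka + kb + bytes([row])).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def garble_and():
    # Two random 32-byte labels per wire: index 0 encodes FALSE, 1 TRUE.
    wires = {w: (os.urandom(32), os.urandom(32)) for w in "abc"}
    table = []
    for va in (0, 1):
        for vb in (0, 1):
            row = 2 * va + vb
            out_label = wires["c"][va & vb]
            table.append(xor(H(wires["a"][va], wires["b"][vb], row), out_label))
    return wires, table

def evaluate(table, ka: bytes, kb: bytes, row: int) -> bytes:
    # The evaluator holds one label per input wire and decrypts one row,
    # learning only the output label -- not the wire semantics.
    return xor(H(ka, kb, row), table[row])

wires, table = garble_and()
# Evaluate on a=1, b=0 (row 2): yields the FALSE label of output wire c.
out = evaluate(table, wires["a"][1], wires["b"][0], 2)
assert out == wires["c"][0]
```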

Joint work with Tal Malkin and Mike Rosulek.

### Thursday, August 11th, 2016

There has been an explosion of interest in secure program obfuscation, especially indistinguishability obfuscation (iO), over the last 3 years. However, this study remains in its infancy. In this talk, we'll give an overview of the current state of research on constructing iO, focusing on where we stand in terms of attacks and security models. In particular, we'll discuss recent work showing constructions of iO provably secure assuming only weak forms of multilinear maps.

Indistinguishability obfuscation is a central primitive in cryptography. The security of the existing multilinear map constructions on which current obfuscation candidates are based is poorly understood. In a few words, multilinear maps allow for checking whether an arbitrary bounded-degree polynomial on hidden values evaluates to zero or not. All known attacks on multilinear maps depend on the information revealed by computations that result in encodings of zero. This includes the recent annihilation attacks of Miles, Sahai and Zhandry [EPRINT 2016/147] on obfuscation candidates as a special case.

Building on a modification of the Garg, Gentry and Halevi [EUROCRYPT 2013] multilinear maps (GGH for short), we present a new obfuscation candidate that is resilient to these vulnerabilities. Specifically, in our construction the results of all computations yielding a zero provably hide all the secret system parameters. This is the first obfuscation candidate that weakens the security needed from the zero-test.

Formally, we prove security of our construction in a weakening of the idealized graded encoding model that accounts for all known vulnerabilities on GGH maps. (Joint work with Pratyay Mukherjee and Akshayaram Srinivasan)

Functional encryption lies at the frontiers of current research in cryptography; some variants have been shown sufficiently powerful to yield indistinguishability obfuscation (IO) while other variants have been constructed from standard assumptions such as LWE. Indeed, most variants have been classified as belonging to either the former or the latter category. However, one mystery that has remained is the case of *secret-key functional encryption* with an unbounded number of keys and ciphertexts. On the one hand, this primitive is not known to imply anything outside of minicrypt, the land of secret-key crypto, but on the other hand, we do not know how to construct it without the heavy hammers in obfustopia.

In this work, we show that (subexponentially secure) secret-key functional encryption is powerful enough to construct indistinguishability obfuscation if we additionally assume the existence of (subexponentially secure) plain public-key encryption. In other words, secret-key functional encryption provides a bridge from cryptomania to obfustopia.

On the technical side, our result relies on two main components. As our first contribution, we show how to use secret-key functional encryption to get "exponentially-efficient indistinguishability obfuscation" (XIO), a notion recently introduced by Lin et al. (PKC '16) as a relaxation of IO. Lin et al. show how to use XIO and the LWE assumption to build IO. As our second contribution, we improve on this result by replacing its reliance on the LWE assumption with any plain public-key encryption scheme.

Lastly, we ask whether secret-key functional encryption can be used to construct public-key encryption itself and therefore take us all the way from minicrypt to obfustopia. A result of Asharov and Segev (FOCS '15) shows that this is not the case under black-box constructions, even for exponentially secure functional encryption. We show, through a non-black-box construction, that subexponentially secure secret-key functional encryption indeed leads to public-key encryption. The resulting public-key encryption scheme, however, is at most quasi-polynomially secure, which is insufficient to take us to obfustopia.

Joint work with Ryo Nishimaki, Alain Passelègue, and Daniel Wichs.

Functional Encryption (FE) is a generalisation of public key encryption where the secret key corresponds to a function f, the ciphertext corresponds to some input x, and decryption enables a user to recover f(x) and nothing more.

There has been a lot of activity in the cryptographic community for constructing FE. The two prevalent approaches are 1) Assuming the existence of indistinguishability obfuscation (iO) or multilinear maps: most known candidates have been broken, and hardness of those that survive is poorly understood. 2) Based on standard assumptions: best known constructions achieve FE for general Boolean circuits [GKPVZ13,GVW15], but with restrictions on the adversary in the security game.

In this talk, I will discuss new approaches to constructing FE. Our focus is on arithmetic circuits, general adversaries, and standard assumptions.

Partly based on a joint work with Alon Rosen.

I will first show that VBB obfuscation is not possible to achieve in a large variety of structured and algebraic oracle models (e.g., the random trapdoor permutation model or the constant-degree graded encoding model over any finite ring), and then show how to use this result to prove lower bounds on the black-box complexity of indistinguishability obfuscation from a large set of primitives (e.g., TDPs or DDH).

Based on joint works with Ameer Mohammed, Soheil Nematihaji, Rafael Pass, and Abhi Shelat.

We give the first demonstration of a cryptographic hardness property of the Goldreich-Goldwasser-Micali (GGM) pseudo-random function family when the secret key is exposed. We prove that for any constant c>0, the GGM family is a 1/n^{2+c}-weakly one-way family of functions, when the lengths of seeds, inputs, and outputs are equal. Namely, any efficient algorithm fails to invert GGM with probability at least 1/n^{2+c} -- even when given the secret key. We additionally state a natural condition under which the GGM family is strongly one-way. Along the way, we prove a purely combinatorial lemma for 'GGM-like' functions, relating the size of the image of such functions to the statistical distance between certain 'sub-functions'.
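For reference, the GGM construction itself is short: a length-doubling PRG applied along the bits of the input. A sketch with SHA-256 standing in for the PRG (an illustrative substitution, not the formal model of the paper):

```python
import hashlib

def G(seed: bytes):
    """Length-doubling PRG from SHA-256: G(s) = (G_0(s), G_1(s))."""
    return (hashlib.sha256(seed + b"\x00").digest(),
            hashlib.sha256(seed + b"\x01").digest())

def ggm_prf(key: bytes, x: str) -> bytes:
    """GGM PRF: walk a binary tree of PRG evaluations, taking the left
    or right half at each level according to the next input bit; the
    leaf reached is F_key(x)."""
    s = key
    for bit in x:
        s = G(s)[int(bit)]
    return s

y = ggm_prf(b"\x00" * 32, "0110")
```

The one-wayness question of the talk asks how hard it is, given a leaf value such as `y` (and even the key), to find an input mapping to it.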

In this talk, we will discuss our recent lower bounds on the communication and round complexity of secure Multi-Party Computation (MPC) protocols. In the information theoretic setting, we show that protocols based on the traditional secret-sharing design paradigm cannot be significantly improved with respect to the round and communication complexity even in the preprocessing model. In the computational setting, we revisit the exact round complexity of two-party as well as multi-party computation protocols in the simultaneous message exchange model since there was no known lower bound in the multi-party setting without trusted setup. In particular, we establish a lower bound on the round complexity connected to the round complexity of non-malleable commitments.

This talk is based on joint works with Ivan Damgård, Jesper B. Nielsen, and Michael Raskin, and with Sanjam Garg, Pratyay Mukherjee, and Omkant Pandey.

Classical results on secure multi-party computation (MPC) imply that fully secure computation, including fairness (either all parties get output or none) and robustness (output delivery is guaranteed), is impossible unless a majority of the parties is honest. Recently, cryptocurrencies like Bitcoin were utilized to mitigate the fairness loss in MPC against a dishonest majority. The idea is that when the protocol aborts in an unfair manner (i.e., after the adversary receives output), honest parties get compensated (in coins) by the adversarially controlled parties.

We describe the first robust MPC protocol with compensation, i.e., an MPC protocol where not only fairness is guaranteed (via compensation) but additionally the protocol is guaranteed to deliver output to the parties that get engaged. This means that the adversary, after an initial round of deposits, is not even able to mount a denial-of-service attack without having to suffer a monetary penalty. Our robust MPC protocol requires only a constant number of (coin-transfer and communication) rounds. To prove the security of our construction, we put forth a new formal model of secure MPC with compensation and show how the introduction of suitable ledger and synchronization functionalities makes it possible to describe such protocols using standard interactive Turing machines (ITM) circumventing the need for the use of extra features that are outside the standard model as in previous works. Our model is expressed in the universal composition setting with global setup and is equipped with a composition theorem that enables the design of protocols that compose safely with each other and within larger environments alongside other protocols with compensation; such a composition theorem for MPC protocols with compensation was not known before.

This is joint work with Aggelos Kiayias and Hong-Sheng Zhou.

In *synchronous* MPC, parties are connected by a synchronous network with a global clock, and protocols proceed in *rounds* with strong delivery guarantees, whereas *asynchronous* MPC protocols can be deployed even in networks that deliver messages in an arbitrary order and impose arbitrary delays on them.

### Friday, August 12th, 2016

No abstract available.

In a multi-party fair coin-flipping protocol, the parties output a common (close to) unbiased bit, even when some corrupted parties try to bias the output. In this work we focus on the case of a dishonest majority, i.e., at least half of the parties can be corrupted. Cleve (STOC 1986) has shown that in any m-round coin-flipping protocol the corrupted parties can bias the honest parties' common output bit by 1/m. For more than two decades the best known coin-flipping protocol against a dishonest majority was that of Awerbuch, Blum, Chor, Goldwasser, and Micali [Manuscript '85], who presented a t-party, m-round protocol with bias t/\sqrt{m}. This was changed by the breakthrough result of Moran, Naor and Segev (TCC 2009), who constructed an m-round, 2-party coin-flipping protocol with optimal bias 1/m. Recently, Haitner and Tsfadia (STOC '14) constructed an m-round, three-party coin-flipping protocol with bias O(log^3(m) / m). Still, for the case of more than three parties, the best known protocol remains the \Theta(t / \sqrt{m})-bias protocol of Awerbuch et al.

We make a step towards eliminating the above gap, presenting a t-party, m-round coin-flipping protocol with bias O(\frac{t * 2^t * \sqrt{\log m}}{m^{1/2+1/(2^{t-1}-2)}}). This improves upon the \Theta(t / \sqrt{m})-bias protocol of Awerbuch et al. for any t < 1/2 * log(log(m)); in particular, for t \in O(1) this yields a 1/m^{1/2 + \Theta(1)}-bias protocol. For the three-party case, this yields an O(\sqrt{\log m}/m)-bias protocol, improving over the O(log^3(m) / m)-bias protocol of Haitner and Tsfadia. Our protocol generalizes that of Haitner and Tsfadia by presenting appropriate "defense protocols" for the remaining parties to interact in, in case some parties abort or are caught cheating (Haitner and Tsfadia only presented a two-party defense protocol, which limits their final protocol to three parties).

We analyze our new protocols by presenting a new paradigm for analyzing fairness of coin-flipping protocols. We map the set of adversarial strategies that try to bias the honest parties outcome in the protocol to the set of the feasible solutions of a linear program. The gain each strategy achieves is the value of the corresponding solution. We then bound the optimal value of the linear program by constructing a feasible solution to its dual.
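For intuition on where the bias comes from, consider the classic two-party (Blum-style) commit-then-reveal flip, sketched here with a hash commitment; the aborting strategy noted in the comments is the kind of adversarial strategy the linear program ranges over.

```python
import hashlib, os, secrets

def commit(bit: int):
    """Hash-based commitment to a bit; returns (commitment, opening)."""
    r = os.urandom(32)
    return hashlib.sha256(bytes([bit]) + r).hexdigest(), (bit, r)

def open_commit(c: str, opening) -> int:
    bit, r = opening
    assert hashlib.sha256(bytes([bit]) + r).hexdigest() == c
    return bit

# Blum-style two-party coin flip: A commits, B responds, A opens.
a = secrets.randbits(1)
c, opening = commit(a)               # A -> B: commitment to a
b = secrets.randbits(1)              # B -> A: b in the clear
coin = open_commit(c, opening) ^ b   # both parties output a XOR b
# Unfairness: A learns a XOR b before opening and can abort whenever it
# dislikes the result, biasing the honest party's output -- the kind of
# attack behind Cleve's 1/m lower bound.
```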

This is a joint work with Niv Buchbinder, Iftach Haitner, and Eliad Tsfadia.

Consider encrypting $n$ inputs under $n$ independent public keys. Given the ciphertexts $\{c_i=\mathrm{Enc}_{pk_i}(x_i)\}_i$, Alice outputs ciphertexts $c'_1,\ldots,c'_n$ that decrypt to $y_1,\ldots,y_n$ respectively. What relationships between the $x_i$'s and $y_i$'s can Alice induce?

Motivated by applications to delegating computation, Dwork, Langberg, Naor, Nissim and Reingold (2004) showed that a semantically secure scheme disallows *signaling* in this setting, meaning that $y_i$ cannot depend on $x_j$ for $j \neq i$. On the other hand, if the scheme is homomorphic then any *local* (component-wise) relationship is achievable, meaning that each $y_i$ can be an arbitrary function of $x_i$. However, there are also relationships which are neither signaling nor local. Dwork et al. asked if it is possible to have encryption schemes that support such "spooky" relationships. Answering this question is the focus of our work.

Our first result shows that, under the LWE assumption, there exist encryption schemes supporting a large class of "spooky" relationships, which we call *additive function sharing* (AFS) spooky. In particular, for any polynomial-time function $f$, Alice can ensure that $y_1,\ldots,y_n$ are random subject to $\sum_{i=1}^n y_i = f(x_1,\ldots,x_n)$. For this result, the public keys all depend on common public randomness. Our second result shows that, assuming sub-exponentially hard indistinguishability obfuscation (iO) (and additional more standard assumptions), we can remove the common randomness and choose the public keys completely independently. Furthermore, in the case of $n=2$ inputs, we get a scheme that supports an even larger class of spooky relationships.
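In the clear, the AFS target functionality is just additive secret sharing of f's output. This toy sketch shows only the correctness condition; the hard part, which spooky encryption achieves, is producing each share locally from ciphertexts of the inputs.

```python
import secrets

MOD = 2**64  # illustrative share modulus

def afs_shares(f, xs):
    """AFS functionality in the clear: output shares y_1..y_n that are
    uniform subject to sum(y_i) = f(x_1, ..., x_n) mod 2^64."""
    n = len(xs)
    shares = [secrets.randbelow(MOD) for _ in range(n - 1)]
    shares.append((f(*xs) - sum(shares)) % MOD)  # last share fixes the sum
    return shares

shares = afs_shares(lambda a, b, c: a * b + c, [3, 5, 7])
assert sum(shares) % MOD == 22  # = 3*5 + 7
```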

We discuss several implications of AFS-spooky encryption. Firstly, it gives a strong counter-example to a method proposed by Aiello et al. (ABOR00) for building arguments for NP from homomorphic encryption. Secondly, it gives a simple 2-round multi-party computation protocol where, at the end of the first round, the parties can locally compute an additive secret sharing of the output. Lastly, it immediately yields a function secret sharing (FSS) scheme for all functions.

We also define a notion of *spooky-free* encryption, which ensures that no spooky relationship is achievable. We show that any non-malleable encryption scheme is spooky-free. Furthermore, we can construct spooky-free *homomorphic* encryption schemes from SNARKs, and it remains an open problem whether it is possible to do so from falsifiable assumptions.

Pass (TCC 2013) proved that any commitment scheme with non-malleability with respect to commitment, using only two messages of communication, cannot be proved secure via a black-box reduction to any "standard" intractability assumption. We extend this by showing a similar implausibility for two-message bi-directional commitments with non-malleability with respect to opening, another standard notion of non-malleability for commitments.

However, somewhat surprisingly, we show that this barrier breaks down in the synchronous setting, with two unidirectional messages by the committer (and no receiver message), for non-malleability with respect to opening.