Tuesday, August 9th, 2016

9:30 am – 10:30 am
Graded encoding schemes [Garg et al. EUROCRYPT 2013] play a central role in constructing general-purpose indistinguishability obfuscation (IO). Until recently, all known IO schemes relied either on meta-assumptions that encapsulate an exponential family of assumptions (e.g., Pass et al. Crypto 2014), or on polynomial families of assumptions on graded encoding schemes with high polynomial degree/multilinearity (e.g., Gentry et al. FOCS 2014).
 
In this talk, we present two recent works on simplifying the graded encodings needed for constructing IO [Lin EUROCRYPT 2016 and Lin-Vaikuntanathan FOCS 2016]. In sum, IO can be constructed from only constant-degree graded encodings, with a security reduction to a DDH-like assumption on the graded encodings, called the joint-SXDH assumption, together with the sub-exponential security of a polynomial-stretch PRG in NC0. Our assumption on graded encodings is simple, has constant size, and does not require handling composite-order rings. This narrows the gap between the mathematical objects that exist (bilinear maps, from elliptic curve groups) and those that suffice to construct general-purpose IO.
11:00 am – 11:20 am

We abstract the core of the Bitcoin protocol, termed the Bitcoin backbone in earlier work, now including the target recalculation mechanism, and we show that it can be instantiated to implement a robust transaction ledger. This is, to our knowledge, the first full analysis of the Bitcoin protocol in a dynamic environment where the number of participants is allowed to evolve over time.

Joint work with Juan Garay and Nikos Leonardos.
 

11:20 am – 11:40 am

Nakamoto’s famous blockchain protocol enables achieving consensus in a so-called permissionless setting—anyone can join (or leave) the protocol execution, and the protocol instructions do not depend on the identities of the players. His ingenious protocol prevents “sybil attacks” (where an adversary spawns any number of new players) by relying on computational puzzles (a.k.a. “moderately hard functions”) introduced by Dwork and Naor (Crypto’92).
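
To make the puzzle mechanism concrete, here is a minimal sketch of such a hash-based computational puzzle; SHA-256 and the difficulty encoding are illustrative choices, not the formal model of the talk:

    import hashlib

    def mine(block_data: bytes, difficulty_bits: int) -> int:
        # Search for a nonce such that SHA-256(block_data || nonce)
        # falls below a target, i.e., starts with `difficulty_bits` zero bits.
        target = 1 << (256 - difficulty_bits)
        nonce = 0
        while True:
            h = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
            if int.from_bytes(h, "big") < target:
                return nonce
            nonce += 1

    def verify(block_data: bytes, nonce: int, difficulty_bits: int) -> bool:
        # Verification is a single hash evaluation.
        h = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        return int.from_bytes(h, "big") < (1 << (256 - difficulty_bits))

Solving costs about 2^difficulty_bits hash evaluations in expectation while verifying costs one, which is why spawning many identities is expensive for the adversary but cheap for everyone else to check.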

Prior works that analyze the blockchain protocol either make the simplifying assumption that network channels are fully synchronous (i.e., messages are instantly delivered without delays) (Garay et al., Eurocrypt’15) or only consider specific attacks (Nakamoto’08; Sompolinsky and Zohar, FC’15); additionally, as far as we know, none of them deals with players joining or leaving the protocol.

We prove that the blockchain consensus mechanism satisfies strong forms of consistency and liveness in an asynchronous network with adversarial delays that are a priori bounded, within a formal model allowing for adaptive corruption and spawning of new players, assuming that the computational puzzle is modeled as a random oracle. (We complement this result by showing a simple attack against the blockchain protocol in a fully asynchronous setting, demonstrating that the “puzzle-hardness” needs to be appropriately set as a function of the maximum network delay.)

As an independent contribution, we define an abstract notion of a blockchain protocol and identify appropriate security properties of such protocols; we prove that Nakamoto’s blockchain protocol satisfies them and that these properties are sufficient for typical applications. We finally show how to use our analysis to build new blockchain protocols that overcome some of the bottlenecks in Nakamoto’s original protocol.

The analysis of Nakamoto’s blockchain is based on joint work with Lior Seeman and abhi shelat, and new blockchain protocols are based on joint work with Elaine Shi. No prior knowledge of Bitcoin or the blockchain will be assumed.

11:40 am – 12:00 pm

The distributed systems and cryptography literature has traditionally focused on the "permissioned" model where protocol participants are known a priori. Bitcoin's rapid rise to fame represents an exciting breakthrough: it popularized a "permissionless" model where anyone can join and leave dynamically, and there is no prior knowledge of other participants.

Bitcoin's core protocol is commonly referred to as a "blockchain", which, roughly speaking, realizes a consensus abstraction ensuring consistency and liveness. Today's blockchain protocols, however, suffer from two main drawbacks that have given rise to vigorous debates in the community: 1) existing protocols have terrible performance; 2) existing protocols are not incentive compatible, and selfish mining attacks are well known.

In this talk, we present two recent results that address these pain points: Fruitchain and Hybrid Consensus. Fruitchain is a new, game-theoretically secure blockchain design that incentivizes honest behavior. Hybrid Consensus offers efficiency bootstrapping for permissionless consensus: we show how to leverage a slow blockchain protocol to bootstrap classical Byzantine fault tolerance protocols, achieving consensus in the permissionless setting while attaining the performance of their permissioned counterparts.

Joint work with Rafael Pass.

12:00 pm – 12:20 pm

The Scrypt algorithm was designed to be a sequential memory-hard function, i.e., the product of the time and memory required to evaluate it should be large. Scrypt has found widespread application in the context of password hashing (in fact, the same design methodology underlies Argon2d, one of the winning designs in the recent password-hashing competition).
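
As a rough illustration of where the sequential memory-hardness comes from, here is a simplified sketch of Scrypt's ROMix core, with SHA-256 standing in for Scrypt's BlockMix function (block sizes and parameters are illustrative):

    import hashlib

    def romix(block: bytes, n: int) -> bytes:
        # Phase 1: fill a table of n blocks sequentially: V[i] = H^(i+1)(B).
        x = hashlib.sha256(block).digest()
        v = []
        for _ in range(n):
            v.append(x)
            x = hashlib.sha256(x).digest()
        # Phase 2: n data-dependent reads; the index j is derived from the
        # current state, so the reads cannot be predicted in advance.
        for _ in range(n):
            j = int.from_bytes(x[-8:], "big") % n
            x = hashlib.sha256(bytes(a ^ b for a, b in zip(x, v[j]))).digest()
        return x

Intuitively, an evaluator who stores fewer than n blocks must recompute missing table entries on the fly, trading memory for time; turning this intuition into a proof is exactly the difficulty the work addresses.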

However, providing an actual proof that Scrypt is indeed sequential memory-hard has been a challenging open problem. I will review the work initiated during the Simons program on providing provable guarantees for Scrypt.

Joint work with Joel Alwen, Binyi Chen, Chethan Kamath, Vladimir Kolmogorov, Krzysztof Pietrzak.

2:00 pm – 3:00 pm
Under the Decisional Diffie-Hellman (DDH) assumption, we present a 2-out-of-2 secret sharing scheme that supports compact homomorphic evaluation of branching programs on the shares.
 
This yields a number of DDH-based applications, including:
   - A secure 2-party computation protocol for evaluating any branching program or NC1 circuit, where the communication complexity is linear in the input and output size (and only the running time grows with the size of the branching program or circuit).
   - A secure 2-party computation protocol for evaluating any leveled boolean circuit of size S with communication complexity O(S / log S).
   - A 2-party function secret sharing scheme for general branching programs, with inverse polynomial error probability.
   - A 2-server private information retrieval scheme supporting general searches expressed by branching programs.
 
Prior to our work, similar results could only be achieved using fully homomorphic encryption.
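
For background, plain (non-homomorphic) 2-out-of-2 additive secret sharing is easy to state; the novelty of the scheme above is that its DDH-based shares additionally support compact homomorphic evaluation of branching programs, which this sketch does not capture:

    import secrets

    Q = 2**61 - 1  # illustrative modulus

    def share(x: int) -> tuple[int, int]:
        # Each share alone is uniformly distributed and reveals nothing about x.
        r = secrets.randbelow(Q)
        return r, (x - r) % Q

    def reconstruct(s0: int, s1: int) -> int:
        return (s0 + s1) % Q

    s0, s1 = share(42)
    assert reconstruct(s0, s1) == 42
    # Adding shares of x and y component-wise yields shares of x + y.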
 
Joint work with Niv Gilboa and Yuval Ishai.

Wednesday, August 10th, 2016

11:00 am – 11:20 am
Informally, non-malleable codes provide a means of transforming an adversarial channel into a channel whose output is either identical to or unrelated to the input. We show how to construct efficient, unconditionally secure non-malleable codes for bounded output locality. In particular, our scheme is resilient against functions in which any output bit depends on at most n^δ input bits, where n is the total number of bits in a codeword and 0 ≤ δ < 1 is a constant. Notably, this tampering class includes NC^0.
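
As a toy illustration of bounded output locality, the following tampering function has output locality 2 (each output bit depends on at most two codeword bits); the construction tolerates all functions of this shape with locality up to n^δ:

    def tamper(codeword: list[int]) -> list[int]:
        # Output bit i depends only on input bits i and (i+1) mod n.
        n = len(codeword)
        return [codeword[i] ^ codeword[(i + 1) % n] for i in range(n)]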
 
Joint work with Marshall Ball, Dana Dachman-Soled, and Mukul Kulkarni.
11:20 am – 11:40 am

Memory hard functions (MHFs), first explicitly introduced by Percival, are a promising key-stretching tool for password hashing because the cost of storing/retrieving items from memory is relatively constant across different computer architectures. Thus, in contrast to standard cryptographic hash functions (e.g., SHA256), the cost of computing an MHF cannot be significantly reduced by developing customized hardware (ASICs). More specifically, we want to ensure that any circuit evaluating multiple instances of the MHF has high amortized AT-complexity --- Area × Time / #instances. Data-Independent Memory Hard Functions (iMHFs) are an important variant of MHFs due to their greater resistance to side-channel attacks. An iMHF can be specified by a directed acyclic graph G specifying data-dependencies during computation. The recently completed Password Hashing Competition produced many candidate iMHFs, but many of them had not been analyzed until recently.
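
Concretely, an iMHF given by a DAG G is evaluated by a graph-labeling computation of roughly the following shape (a sketch; SHA-256 is an illustrative stand-in for the underlying compression function, and nodes are assumed to be numbered in topological order):

    import hashlib

    def evaluate_imhf(parents: dict[int, list[int]], password: bytes) -> bytes:
        # The label of node v hashes v together with its parents' labels;
        # the access pattern depends only on the graph, never on the
        # password -- hence "data-independent".
        labels: dict[int, bytes] = {}
        for v in sorted(parents):
            data = password + v.to_bytes(4, "big")
            for u in parents[v]:
                data += labels[u]
            labels[v] = hashlib.sha256(data).digest()
        return labels[max(parents)]

    # A 4-node path 0 -> 1 -> 2 -> 3 (real candidates use much denser DAGs):
    tag = evaluate_imhf({0: [], 1: [0], 2: [1], 3: [2]}, b"hunter2")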

This talk will summarize recent results demonstrating that a combinatorial property called depth-robustness fully characterizes iMHFs with high amortized-AT complexity. We will also show that Argon2i, the winner of the password hashing competition, is defined using a directed acyclic graph G that is not depth-robust. The resulting attacks are practical for realistic settings of the Argon2i parameters.

The talk is based on joint work with Joel Alwen and Krzysztof Pietrzak.

11:40 am – 12:00 pm

We show how to garble a large database and then a sequence of adaptively and adversarially chosen RAM programs that query and modify the database in arbitrary ways. The security property is that the garbled database and programs reveal only the outputs of the programs when run in sequence on the database. The runtime, space requirements and description size of the garbled programs are proportional only to those of the plaintext programs and the security parameter. The construction is based on indistinguishability obfuscation for circuits and poly-to-one collision-resistant hash functions.

As an immediate application, our scheme is the first to provide a way to outsource large databases to untrusted servers, and later query and update the database over time in a private and verifiable way, with complexity and description size proportional to those of the unprotected queries.

Joint work with Ran Canetti, Yilei Chen and Justin Holmgren.

12:00 pm – 12:20 pm

In this talk I will present recent results on achieving perfect zero knowledge for Interactive Probabilistically-Checkable Proofs and Interactive Oracle Proofs.

2:00 pm – 3:00 pm

An Oblivious RAM (ORAM), introduced by Goldreich and Ostrovsky (JACM 1996), is a (probabilistic) RAM that hides its access pattern, i.e., for every input, the observed sequence of accessed locations is similarly distributed. Great progress has been made in recent years in minimizing the overhead of ORAM constructions, with the goal of obtaining the smallest overhead possible.
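
The simplest way to see the definition in action is the trivial linear-scan ORAM, which hides the access pattern perfectly at the cost of O(n) overhead per access (a sketch; real constructions also encrypt the cell contents, which is elided here):

    class LinearScanORAM:
        def __init__(self, n: int):
            self.cells = [0] * n

        def access(self, index: int, write_value=None):
            # Touch and rewrite every cell on every access, so the observed
            # physical access pattern is one fixed scan, independent of `index`.
            result = None
            for i in range(len(self.cells)):
                v = self.cells[i]
                if i == index:
                    result = v
                    if write_value is not None:
                        v = write_value
                self.cells[i] = v
            return result

The research question is how much of this overhead is inherent, which is exactly what the Goldreich-Ostrovsky lower bound addresses.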

We revisit the lower bound on the overhead required to obliviously simulate programs, due to Goldreich and Ostrovsky. While the lower bound is fairly general, and in particular covers the offline case (where the simulator is given all reads and writes ahead of time), it does assume that the simulator behaves in a "balls and bins" fashion. That is, the simulator must act by shuffling data items around, and is not allowed to use a sophisticated encoding of the data.

We prove that, for the offline case, showing a lower bound without the above restriction is related to the size of circuits for sorting. Our proof is constructive and uses a bit-slicing approach that manipulates the bit representations of data in the simulation. This implies that without first obtaining as-yet-unknown superlinear lower bounds on the size of such circuits, we cannot hope to get lower bounds on offline (unrestricted) ORAMs.

Joint work with Elette Boyle.

3:30 pm – 3:50 pm
We present an *adaptive* and *non-interactive* protocol for verifying arbitrary efficient computations in *fixed* polynomial time. Our protocol is computationally sound and can be based on any computational PIR scheme, which in turn can be based on standard polynomial-time cryptographic assumptions (e.g., the worst-case hardness of polynomial-factor approximation of short-vector lattice problems). In our protocol, the prover and the verifier do not need to interact at all: the verifier sets up a public key ahead of time, and this key can be used by any prover to prove arbitrary statements in a completely adaptive manner. Verification is done using a secret verification key, and soundness relies on this key not being known to the prover. Our protocol further allows proving statements about computations of arbitrary RAM machines.
 
Previous works either relied on knowledge assumptions or could only offer non-adaptive two-message protocols (where the first message could not be re-used), and they required either obfuscation-based assumptions or super-polynomial hardness assumptions.
 
We show that our techniques can also be applied to construct a new type of (non-adaptive) 2-message delegation protocol for batch NP-statements. Specifically, we can simultaneously prove the membership of multiple instances in a given NP language, with communication complexity proportional to the length of a *single* witness.
 
This is joint work with Zvika Brakerski and Justin Holmgren.
3:50 pm – 4:10 pm

We present simple, practical, and powerful new techniques for garbled circuits. These techniques result in significant concrete and asymptotic improvements over the state of the art, for several natural kinds of computations. For arithmetic circuits over the integers, our construction results in garbled circuits with free addition, weighted threshold gates with cost independent of fan-in, and exponentiation by a fixed exponent with cost independent of the exponent. For boolean circuits, our construction gives an exponential improvement over the state of the art for threshold gates (including AND/OR gates) of high fan-in. Our construction can be efficiently instantiated with practical symmetric-key primitives (e.g., AES), and is proven secure under assumptions similar to those of the Free-XOR garbling scheme (Kolesnikov & Schneider, ICALP 2008).
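
The arithmetic techniques of the talk go well beyond a short sketch, but for orientation, here is a textbook garbled AND gate in the classical Yao style that these results improve upon (assumptions: SHA-256 as the key-derivation hash, and a block of zero bytes as a decryption recognizer in place of point-and-permute):

    import hashlib, secrets

    def kdf(ka: bytes, kb: bytes) -> bytes:
        return hashlib.sha256(ka + kb).digest()

    def garble_and_gate():
        # Two random 16-byte keys per wire; the table encrypts the correct
        # output-wire key under each pair of input-wire keys.
        wa = [secrets.token_bytes(16) for _ in range(2)]
        wb = [secrets.token_bytes(16) for _ in range(2)]
        wc = [secrets.token_bytes(16) for _ in range(2)]
        table = []
        for x in (0, 1):
            for y in (0, 1):
                pad = kdf(wa[x], wb[y])
                plaintext = wc[x & y] + b"\x00" * 16  # key + recognizer
                table.append(bytes(p ^ q for p, q in zip(pad, plaintext)))
        secrets.SystemRandom().shuffle(table)
        return wa, wb, wc, table

    def evaluate(ka: bytes, kb: bytes, table) -> bytes:
        # Exactly one row decrypts to a plaintext ending in zero bytes
        # (up to negligible probability); its first half is the output key.
        pad = kdf(ka, kb)
        for row in table:
            pt = bytes(p ^ q for p, q in zip(pad, row))
            if pt[16:] == b"\x00" * 16:
                return pt[:16]
        raise ValueError("no row decrypted")

Note that this baseline costs four ciphertexts per gate regardless of gate structure; the point of the talk is that threshold and arithmetic gates can be garbled far more cheaply.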

Joint work with Tal Malkin and Mike Rosulek. 

4:10 pm – 4:30 pm
We revisit the problem of building cryptographic primitives with updating capabilities. We introduce the notion of updatable randomized encodings for circuits and study its implications for updatable cryptography. Starting from an updatable randomized encoding scheme, we show how to generically obtain updatable versions of traditional notions such as secure multi-party computation and non-interactive zero-knowledge proofs, and also more advanced primitives such as functional encryption, attribute-based encryption, and so on.
 
We propose a construction of updatable randomized encodings based on compact secret-key functional encryption. Plugging in known constructions of functional encryption, we get constructions of updatable randomized encodings based on one-way functions and multilinear maps. We also consider a decomposable version of updatable randomized encodings, a.k.a. updatable garbled circuits, and show how to construct it based on the inapproximability of lattice problems.
 
We also study obfuscation for evolving software. We introduce the notion of patchable indistinguishability obfuscation and provide a construction based on indistinguishability obfuscation and DDH.
 
Based on joint works with Aloni Cohen, Abhishek Jain and Amit Sahai.

Thursday, August 11th, 2016

9:30 am – 10:30 am
Speaker: Amit Sahai, UCLA

There has been an explosion of interest in secure program obfuscation, especially indistinguishability obfuscation (iO), over the last 3 years. However, this study remains in its infancy. In this talk, we'll give an overview of the current state of research on constructing iO, focusing on where we stand in terms of attacks and security models. In particular, we'll discuss recent work showing constructions of iO provably secure assuming only weak forms of multilinear maps.

11:00 am – 11:20 am

Indistinguishability obfuscation is a central primitive in cryptography. The security of the existing multilinear map constructions on which current obfuscation candidates are based is poorly understood. In a few words, multilinear maps allow for checking whether an arbitrary bounded-degree polynomial on hidden values evaluates to zero or not. All known attacks on multilinear maps depend on the information revealed by computations that result in encodings of zero. This includes the recent annihilation attacks of Miles, Sahai and Zhandry [EPRINT 2016/147] on obfuscation candidates as a special case.

Building on a modification of the Garg, Gentry and Halevi [EUROCRYPT 2013] multilinear maps (GGH for short), we present a new obfuscation candidate that is resilient to these vulnerabilities. Specifically, in our construction the results of all computations yielding a zero provably hide all the secret system parameters. This is the first obfuscation candidate that weakens the security needed from the zero-test. 

Formally, we prove security of our construction in a weakening of the idealized graded encoding model that accounts for all known vulnerabilities on GGH maps. (Joint work with Pratyay Mukherjee and Akshayaram Srinivasan)

11:20 am – 11:40 am

Functional encryption lies at the frontiers of current research in cryptography; some variants have been shown sufficiently powerful to yield indistinguishability obfuscation (IO) while other variants have been constructed from standard assumptions such as LWE. Indeed, most variants have been classified as belonging to either the former or the latter category. However, one mystery that has remained is the case of secret-key functional encryption with an unbounded number of keys and ciphertexts. On the one hand, this primitive is not known to imply anything outside of minicrypt, the land of secret-key crypto, but on the other hand, we do not know how to construct it without the heavy hammers in obfustopia.

In this work, we show that (subexponentially secure) secret-key functional encryption is powerful enough to construct indistinguishability obfuscation if we additionally assume the existence of (subexponentially secure) plain public-key encryption. In other words, secret-key functional encryption provides a bridge from cryptomania to obfustopia.

On the technical side, our result relies on two main components. As our first contribution, we show how to use secret-key functional encryption to get “exponentially-efficient indistinguishability obfuscation” (XIO), a notion recently introduced by Lin et al. (PKC '16) as a relaxation of IO. Lin et al. show how to use XIO and the LWE assumption to build IO. As our second contribution, we improve on this result by replacing its reliance on the LWE assumption with any plain public-key encryption scheme.

Lastly, we ask whether secret-key functional encryption can be used to construct public-key encryption itself and therefore take us all the way from minicrypt to obfustopia. A result of Asharov and Segev (FOCS '15) shows that this is not the case under black-box constructions, even for exponentially secure functional encryption. We show, through a non-black-box construction, that subexponentially secure secret-key functional encryption indeed leads to public-key encryption. The resulting public-key encryption scheme, however, is at most quasi-polynomially secure, which is insufficient to take us to obfustopia.

Joint work with Ryo Nishimaki, Alain Passelègue, and Daniel Wichs.

11:40 am – 12:00 pm

Functional Encryption (FE) is a generalisation of public key encryption where the secret key corresponds to a function f, the ciphertext corresponds to some input x, and decryption enables a user to recover f(x) and nothing more.

There has been a lot of activity in the cryptographic community toward constructing FE. The two prevalent approaches are: 1) assuming the existence of indistinguishability obfuscation (iO) or multilinear maps, where most known candidates have been broken, and the hardness of those that survive is poorly understood; 2) relying on standard assumptions, where the best known constructions achieve FE for general Boolean circuits [GKPVZ13, GVW15], but with restrictions on the adversary in the security game.

In this talk, I will discuss new approaches to constructing FE. Our focus is on arithmetic circuits, general adversaries, and standard assumptions, but we restrict the class of functions that is supported. I will discuss how to build FE supporting bounded-degree polynomials, where non-succinct ciphertexts can be achieved by relying on LWE, and succinct ciphertexts can be achieved by additionally relying on some non-standard assumptions. I will discuss the barriers to basing the succinct-ciphertext construction on standard assumptions alone.

Partly based on joint work with Alon Rosen.

12:00 pm – 12:20 pm

I will first show that VBB obfuscation is impossible to achieve in a large variety of structured and algebraic oracle models (e.g., the random trapdoor permutation or constant-degree graded encoding model over any finite ring), and then show how to use this result to prove lower bounds on the black-box complexity of indistinguishability obfuscation from a large set of primitives (e.g., TDPs or DDH).

Based on joint works with Ameer Mohammed, Soheil Nematihaji, Rafael Pass, and abhi shelat.

 

2:00 pm – 3:00 pm

We give the first demonstration of a cryptographic hardness property of the Goldreich-Goldwasser-Micali (GGM) pseudo-random function family when the secret key is exposed. We prove that for any constant c>0, the GGM family is a 1/n^{2+c}-weakly one-way family of functions, when the lengths of seeds, inputs, and outputs are equal. Namely, any efficient algorithm fails to invert GGM with probability at least 1/n^{2+c} -- even when given the secret key. We additionally state a natural condition under which the GGM family is strongly one-way. Along the way, we prove a purely combinatorial lemma for 'GGM-like' functions, relating the size of the image of such functions to the statistical distance between certain 'sub-functions'.
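
For reference, the GGM construction itself is a binary-tree walk over a length-doubling PRG (sketched here with SHA-256 standing in for the PRG):

    import hashlib

    def prg(seed: bytes) -> tuple[bytes, bytes]:
        # Length-doubling PRG G(s) = (G_0(s), G_1(s)); illustrative instantiation.
        return (hashlib.sha256(b"0" + seed).digest(),
                hashlib.sha256(b"1" + seed).digest())

    def ggm(seed: bytes, x: str) -> bytes:
        # Walk the tree from the root seed, taking the left or right half
        # of the PRG output according to each input bit.
        s = seed
        for bit in x:
            s = prg(s)[int(bit)]
        return s

    y = ggm(b"\x00" * 32, "10110")  # evaluate on the 5-bit input 10110

The result above says that, even given the seed, recovering an input x from y = ggm(seed, x) is weakly hard when seeds, inputs, and outputs have equal length.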

3:30 pm – 3:50 pm

In this talk, we will discuss our recent lower bounds on the communication and round complexity of secure Multi-Party Computation (MPC) protocols. In the information-theoretic setting, we show that protocols based on the traditional secret-sharing design paradigm cannot be significantly improved with respect to round and communication complexity, even in the preprocessing model. In the computational setting, we revisit the exact round complexity of two-party as well as multi-party computation protocols in the simultaneous message exchange model, since there was no known lower bound in the multi-party setting without trusted setup. In particular, we establish a lower bound on the round complexity that is connected to the round complexity of non-malleable commitments.

This talk is based on joint works with Ivan Damgård, Jesper B. Nielsen, and Michael Raskin, and with Sanjam Garg, Pratyay Mukherjee, and Omkant Pandey.

 
3:50 pm – 4:10 pm

Classical results on secure multi-party computation (MPC) imply that fully secure computation, including fairness (either all parties get output or none) and robustness (output delivery is guaranteed), is impossible unless a majority of the parties is honest. Recently, cryptocurrencies like Bitcoin were utilized to mitigate the loss of fairness in MPC against a dishonest majority. The idea is that when the protocol aborts in an unfair manner (i.e., after the adversary receives output), the honest parties get compensated (in coins) by the adversarially controlled parties.

We describe the first robust MPC protocol with compensation, i.e., an MPC protocol where not only is fairness guaranteed (via compensation), but additionally the protocol is guaranteed to deliver output to the parties that engage in it. This means that the adversary, after an initial round of deposits, is not even able to mount a denial-of-service attack without suffering a monetary penalty. Our robust MPC protocol requires only a constant number of (coin-transfer and communication) rounds. To prove the security of our construction, we put forth a new formal model of secure MPC with compensation and show how the introduction of suitable ledger and synchronization functionalities makes it possible to describe such protocols using standard interactive Turing machines (ITMs), circumventing the need for extra features outside the standard model that previous works relied on. Our model is expressed in the universal composition setting with global setup and is equipped with a composition theorem that enables the design of protocols that compose safely with each other and within larger environments alongside other protocols with compensation; such a composition theorem for MPC protocols with compensation was not known before.

This is joint work with Aggelos Kiayias and Hong-Sheng Zhou.

4:10 pm – 4:30 pm
Secure multi-party computation (MPC) allows several mutually distrustful parties to securely compute a joint function of their inputs and exists in two main variants: in synchronous MPC, parties are connected by a synchronous network with a global clock, and protocols proceed in rounds with strong delivery guarantees, whereas asynchronous MPC protocols can be deployed even in networks that deliver messages in an arbitrary order and impose arbitrary delays on them.
 
The two models, synchronous and asynchronous, have to a large extent developed in parallel, with results on feasibility and asymptotic efficiency improvements in both tracks. The most notable gap in this parallel development concerns round complexity. In particular, although under standard assumptions on a synchronous communication network (availability of secure channels and broadcast), synchronous MPC protocols with (exact) constant rounds have been constructed, to the best of our knowledge no constant-round asynchronous MPC protocols based on standard assumptions are known thus far, with the best protocols requiring a number of rounds that is linear in the multiplicative depth of the arithmetic circuit computing the desired function.
 
In this work we close this gap by providing the first constant-round asynchronous MPC protocol that is optimally resilient (i.e., it tolerates up to t < n/3 corrupted parties), adaptively secure, and makes black-box use of a pseudo-random function. It works under the standard network assumptions for protocols in the asynchronous MPC setting, namely, a complete network of point-to-point (secure) asynchronous channels with eventual delivery and asynchronous Byzantine agreement (a.k.a. consensus). We provide formal definitions of these primitives and a proof of security in the Universal Composability framework.
 
This is joint work with Sandro Coretti, Martin Hirt and Vassilis Zikas.

Friday, August 12th, 2016

9:30 am – 10:30 am

No abstract available.

11:20 am – 11:40 am

In a multi-party fair coin-flipping protocol, the parties output a common (close to) unbiased bit, even when some corrupted parties try to bias the output. In this work we focus on the case of a dishonest majority, i.e., at least half of the parties can be corrupted. Cleve (STOC 1986) showed that in any m-round coin-flipping protocol the corrupted parties can bias the honest parties' common output bit by 1/m. For more than two decades the best known coin-flipping protocol against a dishonest majority was that of Awerbuch, Blum, Chor, Goldwasser, and Micali [Manuscript 85], who presented a t-party, m-round protocol with bias t/√m. This changed with the breakthrough result of Moran, Naor and Segev (TCC 2009), who constructed an m-round, 2-party coin-flipping protocol with optimal bias 1/m. Recently, Haitner and Tsfadia (STOC 2014) constructed an m-round, three-party coin-flipping protocol with bias O(log^3(m) / m). Still, for the case of more than three parties, the best known protocol remains the Θ(t/√m)-bias protocol of Awerbuch et al.

We make a step towards eliminating the above gap, presenting a t-party, m-round coin-flipping protocol with bias O(t · 2^t · √(log m) / m^{1/2 + 1/(2^{t-1} - 2)}). This improves upon the Θ(t/√m)-bias protocol of Awerbuch et al. for any t < (1/2) log log m; in particular, for t ∈ O(1) this yields a 1/m^{1/2 + Θ(1)}-bias protocol. For the three-party case, this yields an O(√(log m)/m)-bias protocol, improving over the O(log^3(m)/m)-bias protocol of Haitner and Tsfadia. Our protocol generalizes that of Haitner and Tsfadia by introducing appropriate "defense protocols" for the remaining parties to interact in, in case some parties abort or are caught cheating (Haitner and Tsfadia only presented a two-party defense protocol, which limits their final protocol to three parties).
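
As a sanity check on the exponent in the bias bound above, for t = 3:

    1/2 + 1/(2^{t-1} - 2) = 1/2 + 1/(2^2 - 2) = 1/2 + 1/2 = 1,

and the factor t · 2^t = 24 is a constant, so the bound specializes to O(√(log m)/m), matching the three-party claim.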

We analyze our new protocols by presenting a new paradigm for analyzing the fairness of coin-flipping protocols. We map the set of adversarial strategies that try to bias the honest parties' outcome in the protocol to the set of feasible solutions of a linear program. The gain each strategy achieves is the value of the corresponding solution. We then bound the optimal value of the linear program by constructing a feasible solution to its dual.

This is joint work with Niv Buchbinder, Iftach Haitner, and Eliad Tsfadia.

11:40 am – 12:00 pm

Consider encrypting n inputs under n independent public keys. Given the ciphertexts c_i = Enc_{pk_i}(x_i), Alice outputs ciphertexts c'_1, ..., c'_n that decrypt to y_1, ..., y_n, respectively. What relationships between the x_i's and y_i's can Alice induce?

Motivated by applications to delegating computations, Dwork, Langberg, Naor, Nissim and Reingold [DLNNR04] showed that a semantically secure scheme disallows signaling in this setting, meaning that y_i cannot depend on x_j for j ≠ i. On the other hand, if the scheme is homomorphic then any local (component-wise) relationship is achievable, meaning that each y_i can be an arbitrary function of x_i. However, there are also relationships which are neither signaling nor local. Dwork et al. asked if it is possible to have encryption schemes that support such “spooky” relationships. Answering this question is the focus of our work.

Our first result shows that, under the LWE assumption, there exist encryption schemes supporting a large class of “spooky” relationships, which we call additive function sharing (AFS) spooky. In particular, for any polynomial-time function f, Alice can ensure that y_1, ..., y_n are random subject to y_1 + ... + y_n = f(x_1, ..., x_n). For this result, the public keys all depend on common public randomness. Our second result shows that, assuming sub-exponentially hard indistinguishability obfuscation (iO) (and additional more standard assumptions), we can remove the common randomness and choose the public keys completely independently. Furthermore, in the case of n = 2 inputs, we get a scheme that supports an even larger class of spooky relationships.
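
The AFS-spooky correlation itself is easy to describe directly; the sketch below samples the target output distribution in the clear (the whole point of the actual scheme is that each y_i can be computed locally from the single ciphertext c_i, which this sketch does not capture):

    import secrets

    Q = 2**61 - 1  # illustrative modulus

    def afs_spooky_outputs(f, xs: list[int]) -> list[int]:
        # y_1, ..., y_n uniform subject to sum(y_i) = f(x_1, ..., x_n) mod Q.
        ys = [secrets.randbelow(Q) for _ in range(len(xs) - 1)]
        ys.append((f(*xs) - sum(ys)) % Q)
        return ys

    ys = afs_spooky_outputs(lambda a, b, c: a * b + c, [3, 5, 7])
    assert sum(ys) % Q == (3 * 5 + 7) % Q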

We discuss several implications of AFS-spooky encryption. Firstly, it gives a strong counter-example to a method proposed by Aiello et al. [ABOR00] for building arguments for NP from homomorphic encryption. Secondly, it gives a simple 2-round multi-party computation protocol where, at the end of the first round, the parties can locally compute an additive secret sharing of the output. Lastly, it immediately yields a function secret sharing (FSS) scheme for all functions.

We also define a notion of spooky-free encryption, which ensures that no spooky relationship is achievable. We show that any non-malleable encryption scheme is spooky-free. Furthermore, we can construct spooky-free homomorphic encryption schemes from SNARKs, and it remains an open problem whether it is possible to do so from falsifiable assumptions.

12:00 pm – 12:20 pm
Motivated by the goal of improving the concrete efficiency of secure multiparty computation (MPC), we study the possibility of implementing an infrastructure for MPC. We propose an infrastructure based on oblivious transfer (OT), which would consist of OT channels between some pairs of parties in the network. We devise information-theoretically secure protocols that allow additional pairs of parties to establish secure OT correlations using the help of other parties in the network in the presence of a dishonest majority. Our main technical contribution is an upper bound that matches a lower bound of Harnik, Ishai, and Kushilevitz (Crypto 2007), who studied the number of OT channels necessary and sufficient for MPC. In particular, we characterize which n-party OT graphs G allow t-secure computation of OT correlations between all pairs of parties, showing that this is possible if and only if the complement of G does not contain the complete bipartite graph K_{n-t,n-t} as a subgraph.
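
For small networks, the graph-theoretic condition can be checked by brute force; a sketch (the function name and graph representation are ours):

    from itertools import combinations

    def complement_has_Kkk(n: int, ot_edges: set, t: int) -> bool:
        # True iff the complement of the OT graph contains K_{n-t,n-t}:
        # two disjoint sets A, B of n - t parties each with no OT channel
        # between A and B. By the characterization above, True means
        # t-securely computing OT between all pairs is impossible.
        k = n - t
        if k <= 0 or 2 * k > n:
            return False
        edges = {frozenset(e) for e in ot_edges}
        for A in combinations(range(n), k):
            rest = [v for v in range(n) if v not in A]
            for B in combinations(rest, k):
                if all(frozenset((a, b)) not in edges for a in A for b in B):
                    return True
        return False

    # 4 parties with OT channels 0-1 and 2-3, tolerating t = 2 corruptions:
    # A = {0, 1} and B = {2, 3} have no OT channel between them, so K_{2,2}
    # sits in the complement and the task is infeasible.
    print(complement_has_Kkk(4, {(0, 1), (2, 3)}, 2))  # True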
 
Joint work with Ranjit Kumaresan and Srinivasan Raghuraman.
2:00 pm – 3:00 pm
Speaker: Dakshita Khurana, UCLA      
We construct two-message non-malleable commitments with respect to opening in the standard model, assuming one-to-one one-way functions. Our protocol consists of two uni-directional messages by the committer (with no message from the receiver), and is secure against all polynomial-time adversaries in the standard synchronous setting.

Pass (TCC 2013) proved that any commitment scheme with non-malleability with respect to commitment, using only two messages of communication, cannot be proved secure via a black-box reduction to any "standard" intractability assumption. We extend this by showing a similar implausibility for two-message bi-directional commitments with non-malleability with respect to opening, another standard notion of non-malleability for commitments.

However, somewhat surprisingly, we show that this barrier breaks down in the synchronous setting, with two unidirectional messages by the committer (and no receiver message), for non-malleability with respect to opening. 
 
Our techniques are combinatorial, and depart significantly from the commit-challenge-response structure followed by nearly all prior works on non-malleable protocols in the standard model. Our protocol, together with our implausibility result, also resolves the round complexity of block-wise non-malleable codes (Chandran et al., ICALP 2016) w.r.t. most natural black-box reductions.
 
Joint work with Vipul Goyal and Amit Sahai.