The goal of general-purpose program obfuscation is to make an arbitrary computer program "unintelligible" while preserving its functionality. Obfuscation allows us to achieve a powerful capability: software that can keep a secret. This overview talk will provide an introduction to the present state of obfuscation constructions, and discuss the future of obfuscation research.

### Monday, June 8th, 2015

In this talk, we will present what are known as "zeroizing attacks" against two candidate constructions of cryptographic multilinear maps, namely [GGH13] and [CLT13] (with an emphasis on the latter). We will then discuss recent attempts at recovering from these attacks, both by modifying the mathematical construction of the multilinear maps to prevent zeroizing attacks, and by examining what forms of obfuscation can still be achieved in the presence of zeroizing attacks.

We show how to build indistinguishability obfuscation (iO) for Turing Machines where the overhead is polynomial in the security parameter lambda, the machine description size |M|, and the input size |x| (with only a negligible correctness error). In particular, we avoid growing polynomially with the maximum space of a computation. Our construction is based on iO for circuits, one-way functions, and injective pseudorandom generators.

Our results are based on new "selective enforcement" techniques. Here we first create a primitive called positional accumulators, which allows a small commitment to a much larger storage. The commitment is unconditionally sound for a selected piece of the storage. This primitive serves as an "iO-friendly" tool that allows us to make two different programs equivalent at different stages of a proof; the pieces of storage that are selected depend on which hybrid stage of the proof we are at.
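As rough intuition for how a short commitment can bind a much larger storage, here is a plain Merkle-tree sketch (SHA-256 and a power-of-two leaf count are illustrative choices; the paper's positional accumulator is a different primitive with the additional "iO-friendly" enforcement properties described above):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def commit(leaves):
    # Merkle tree over a power-of-two number of leaves; the root is the
    # short commitment to the whole storage
    level = [h(x) for x in leaves]
    levels = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels[-1][0], levels

def open_path(levels, idx):
    # authentication path: one sibling hash per level
    path = []
    for level in levels[:-1]:
        path.append(level[idx ^ 1])
        idx //= 2
    return path

def verify(root, leaf, idx, path):
    # recompute the root from the leaf and its authentication path
    cur = h(leaf)
    for sib in path:
        cur = h(cur + sib) if idx % 2 == 0 else h(sib + cur)
        idx //= 2
    return cur == root
```

The commitment is a single hash regardless of the number of leaves, and any single position can be opened and checked against it.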

We first build up our enforcement ideas in the simpler context of "message hiding encodings" and work our way up to indistinguishability obfuscation.

Joint work with Allison Bishop Lewko and Venkata Koppula.

All current constructions of indistinguishability obfuscation produce obfuscated programs whose size is at least a multiplicative factor of the security parameter larger than the original program, plus additive terms polynomial in the security parameter and in a bound on the input size of the program being obfuscated.

In this work, we show how to construct the first indistinguishability obfuscation scheme that achieves only a constant multiplicative overhead in the size of the program. The security of our construction requires only the existence of a (sub-exponentially secure) indistinguishability obfuscation scheme for circuits that has any polynomial multiplicative overhead and one-way functions.

Joint work with Abhishek Jain and Amit Sahai.

Inspired by decentralized cryptocurrencies, I will introduce the blockchain model of secure computation which ensures financial fairness in protocol design. I will describe formal modeling issues, and give interesting example protocols such as criminal smart contracts.

I will also talk about how to achieve on-chain privacy for smart contracts as opposed to today's blockchains where financial transactions are stored in cleartext on the blockchain and visible to the public. Finally, I will draw connections to programming language and compiler design.

We show that Rational Proofs do not satisfy basic compositional properties when a large number of "computation problems" are outsourced. Specifically, a "fast" incorrect answer is more remunerative for the prover, since it allows him to solve more problems and collect more rewards. We present an enhanced definition of Rational Proofs that removes the economic incentive for this strategy, and a protocol that achieves it for the case of uniform bounded-depth circuits.

Joint work with Matteo Campanelli (CUNY).

The standard method for designing a secure computation protocol for a function f first transforms f into either a circuit or a RAM program, and then applies a generic secure computation protocol that, respectively, either evaluates Boolean gates or translates the RAM program into oblivious RAM instructions.

In this work, we show a large class of functions for which a different algorithmic approach to secure computation results in more efficient protocols. The first such example of this technique was presented by Aggarwal, Mishra, and Pinkas (J. of Cryptology, 2010) for computing the median; Brickell and Shmatikov (Asiacrypt 2005) showed a similar technique for minimum spanning tree and a variant of shortest paths.

We generalize the technique in both of those works and show that it applies to a large class of problems including matroid optimization, submodular optimization, convex hulls, and certain scheduling problems. The crux of our technique is to securely reduce these problems to secure comparison operations. We then describe general properties under which simulation-based security can be achieved in the honest-but-curious and covert adversary models.
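To illustrate the flavor of reducing a whole computation to comparisons, here is a plaintext sketch of the logic of the Aggarwal-Mishra-Pinkas median protocol; in the secure version each comparison is run as a secure comparison sub-protocol and only its outcome is revealed. The sketch assumes sorted inputs of equal power-of-two length with all elements distinct:

```python
def joint_median(a, b):
    # a and b are the two parties' sorted lists. Each round compares one
    # element of a against one element of b and halves both lists, so only
    # log(n) comparisons (the sole values exchanged in the secure protocol)
    # are needed, instead of a circuit for the whole merge.
    while len(a) > 1:
        half = len(a) // 2
        if a[half - 1] < b[half - 1]:
            # the lower half of a and the upper half of b cannot contain
            # the median of the union; discard them
            a, b = a[half:], b[:half]
        else:
            a, b = a[:half], b[half:]
    return min(a[0], b[0])  # the lower median of the combined 2n elements
```

For example, `joint_median([1, 4, 6, 9], [2, 3, 7, 8])` returns the lower median of the merged list, here 4.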

### Tuesday, June 9th, 2015

Over the last 30 years, garbled circuits have evolved from a purely theoretical technique to a fundamental and practical cryptographic primitive. In this talk, I will survey the optimizations that have significantly improved the efficiency of garbled circuit constructions. The primary focus will be on reducing the size of garbled circuits, but other performance optimizations of the underlying crypto will be discussed as well.
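As a baseline for the optimizations surveyed, here is a toy, unoptimized garbling of a single AND gate. It uses hash-based encryption with a zero-padding check to recognize the correct row (real schemes use point-and-permute and other tricks instead); the label length is an illustrative choice:

```python
import hashlib
import itertools
import os
import random

LBL = 16  # label length in bytes (illustrative)

def H(ka: bytes, kb: bytes) -> bytes:
    return hashlib.sha256(ka + kb).digest()

def xor(x: bytes, y: bytes) -> bytes:
    return bytes(p ^ q for p, q in zip(x, y))

def garble_and():
    # one random label per (wire, bit) pair, for wires a, b (inputs), c (output)
    labels = {w: (os.urandom(LBL), os.urandom(LBL)) for w in "abc"}
    table = []
    for va, vb in itertools.product((0, 1), repeat=2):
        pad = H(labels["a"][va], labels["b"][vb])
        # encrypt the output label, zero-padded so a correct decryption
        # is recognizable
        table.append(xor(pad, labels["c"][va & vb] + b"\x00" * LBL))
    random.shuffle(table)  # hide which row corresponds to which inputs
    return labels, table

def evaluate(table, ka, kb):
    # the evaluator holds exactly one label per input wire and learns
    # exactly one output label
    pad = H(ka, kb)
    for row in table:
        pt = xor(pad, row)
        if pt[LBL:] == b"\x00" * LBL:
            return pt[:LBL]
    raise ValueError("no row decrypted correctly")
```

Note the cost this baseline pays: four ciphertexts per gate, which is exactly what the size optimizations discussed in the talk (row reduction, free-XOR, half-gates) improve on.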

Protocols for secure computation enable mutually distrustful parties to jointly compute on their private inputs without revealing anything but the result. Over recent years, secure computation has become practical and considerable effort has been made to make it more and more efficient. A highly important tool in the design of two-party protocols is Yao's garbled circuit construction (Yao 1986), and multiple optimizations on this primitive have led to performance improvements of orders of magnitude over the last years. However, many of these improvements come at the price of making very strong assumptions on the underlying cryptographic primitives being used (e.g., that AES is secure for related keys, that it is circular secure, and even that it behaves like an ideal cipher). The justification behind making these strong assumptions has been that otherwise it is not possible to achieve fast garbling and thus fast secure computation. In this paper, we take a step back and examine whether it is really the case that such strong assumptions are needed. We provide new methods for garbling that are secure solely under the assumption that the primitive used (e.g., AES) is a pseudorandom function. Our results show that in many cases, the penalty incurred is not significant, and so a more conservative approach to the assumptions being used can be adopted.

Joint work with Shay Gueron, Ariel Nof and Benny Pinkas.

Secure computation enables mutually distrusting parties to jointly evaluate a function on their private inputs without revealing anything but the function’s output. Generic secure computation protocols in the semi-honest model have been studied extensively and several best practices have evolved.

In this work, we design and implement a mixed-protocol framework, called ABY, that efficiently combines secure computation schemes based on Arithmetic sharing, Boolean sharing, and Yao's garbled circuits, and that makes best-practice solutions in secure two-party computation available. Our framework allows pre-computing almost all cryptographic operations and provides novel, highly efficient conversions between secure computation schemes based on pre-computed oblivious transfer extensions. ABY supports several standard operations, and we perform benchmarks on a local network and in a public intercontinental cloud. From our benchmarks we deduce new insights on the efficient design of secure computation protocols, most prominently that oblivious transfer-based multiplications are much more efficient than multiplications based on homomorphic encryption. We use ABY to construct mixed protocols for three example applications - private set intersection, biometric matching, and modular exponentiation - and show that they are more efficient than using a single protocol.

Joint work with Daniel Demmler and Michael Zohner published at NDSS 2015.

We present some work in progress on lower bounds for information-theoretic MPC. One set of results concerns the number of messages required to compute non-trivial functions such as AND: we show that a number of messages quadratic in the number of players is required. In a second line of work we show that protocols following the traditional secret-sharing-based design paradigm cannot be significantly improved with respect to round and communication complexity, even with preprocessing.

We initiate the study of "elastic noisy channels," where the reliability of the channel from the perspective of an honest user can be significantly worse than the reliability of the channel from the perspective of a malicious user. Such elastic channels naturally arise in situations where a malicious user can choose the antenna it uses to receive signals.

In the setting of elastic channels, we pose the following intriguing question: "Can we obliviously establish multiple channels between sender and receiver such that the highest-capacity channel to an honest receiver has more capacity than the average-capacity channel to a malicious receiver?" We refer to this phenomenon in elastic noisy channels as capacity inversion.

In this work, we consider elastic variants of classical noisy channels, namely the binary erasure channel and the binary symmetric channel. We identify non-interactive capacity inverting encodings, i.e. encodings which help achieve capacity inversion, for various parameters of these elastic noisy channels.

Leveraging capacity inversion, we efficiently achieve general unconditionally secure computation over a large parameter space of elastic noisy channels. Prior techniques in secure computation yield such solutions only for a minuscule fraction of this space in this more realistic model of noisy channels.

Joint work with Dakshita Khurana and Amit Sahai

Stock markets have two primary functions: providing liquidity and price discovery. While market micro-structure was mostly ignored, or assumed to function ideally, for the purpose of asset pricing, M. O'Hara (Journal of Finance, 2003) established that both liquidity and price discovery affect asset pricing, and in particular asset returns. While the cost of liquidity provision is borne by investors, and is clearly detrimental to asset returns, periodic price discovery has both positive and negative consequences for asset pricing. In this work we propose using cryptography, in particular secure multi-party computation and zero-knowledge proofs, to set up a novel stock market structure that, to a large extent, removes the negative consequences of liquidity costs and periodic price discovery. Interestingly, the proposed market structure takes us back to the early days of stock markets, i.e. periodic call markets, but with the not-so-"trusted" auctioneer replaced by distributed computation in which no individual party (or small coalition) gets to know the order book.

In this talk, we will discuss the Columbia-Bell Labs project on scalable private database (DB) querying, work in part sponsored by the Intelligence Advanced Research Projects Activity (IARPA). We consider complete and scalable provable security of a DB management system, including access control, protection of the data, and, importantly, hiding the SQL query from the server, all while supporting a rich query set. We are restricted by severe performance requirements (a 10TB, 100M-record DB, with performance "just a little slower than an insecure DB").

We will present our approach, discuss its benefits and tradeoffs, and highlight some issues that arose in our efforts to achieve both provable security and scale. One of our main tools is Yao SFE, and our private DB search algorithm represents a "practical circuit" that motivates improving SFE performance. We will also report on experimental performance.

This talk is based on works with George Argyros, Steven M. Bellovin, Seung Geol Choi, Ben Fisch, Wesley George, Angelos Keromytis, Fernando Krell, Abi Kumarasubramanian, Tal Malkin, Vasilis Pappas and Binh Vo.

### Wednesday, June 10th, 2015

There is a burgeoning literature surrounding the refinement and implementation of probabilistic proof systems. Applications include delegation of computation and closely related problems such as succinct (zero knowledge) arguments. After quickly summarizing the state of the art, I will give a wishlist for theoretical advances that would facilitate bridging the (still substantial) gap between "implemented protocol" and "practically deployable system."

We survey recent progress in cryptographically verifying computation using zero-knowledge Succinct Non-interactive ARgument of Knowledge (SNARK) schemes. Proposed by Kilian and Micali in the 1990s, such computationally-sound proofs were considered intriguing yet impractical, and were invoked mainly for their theoretical implications. Recently, new ideas and schemes have led to both improved theoretical foundations and fully working zkSNARK implementations. This talk will discuss these, and in particular:

- Implemented compilers for ensuring the integrity of general computation (e.g., C programs) in the presence of arbitrary platform corruptions.

- A practical application, Zerocash, which solves Bitcoin's privacy problem. Whereas Bitcoin publicly broadcasts all transactions and account balances, Zerocash replaces these with privacy-preserving zkSNARK proofs.

Based primarily on joint works with Eli Ben-Sasson, Alessandro Chiesa, Christina Garman, Daniel Genkin, Matthew Green, Ian Miers and Madars Virza.

I will talk about a recent line of work that integrates traditional secure computation with a formal financial framework. This line of work identifies and abstracts some key transaction functionalities offered by the Bitcoin network, and shows how to incentivize correct behavior in secure computation (and other cryptographic tasks) in a model where parties have access to such a transaction functionality.

We overview recent and ongoing work investigating blockchain protocols from a provable security perspective. We discuss properties such as common prefix and chain quality and introduce proof techniques for establishing such properties in a synchronous anonymous communication model assuming bounded access to a random oracle. We discuss the consensus problem in this setting and its relation to robust transaction ledgers. We also introduce a new model and a construction for secure computation with fairness and output delivery guarantees that are conditional on a global transaction ledger.

Oblivious Transfer (OT) is a fundamental building block of cryptographic protocols. In this talk I will describe the simplest and most efficient protocol for 1-out-of-2 OT to date, obtained by tweaking the Diffie-Hellman key-exchange protocol. The protocol achieves UC-security against active and adaptive corruptions in the random oracle model. Due to its simplicity, the protocol is extremely efficient.

During the talk I will also report on an implementation of the protocol. In particular, I will discuss a number of choices made to ensure that the security guarantees provided by our proof of security are inherited (to the best possible extent) by our implementation.
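The following sketch captures the Diffie-Hellman tweak behind this style of OT protocol. It is instantiated, for illustration only, over a multiplicative group mod a prime with toy parameter choices; the actual protocol runs over an elliptic-curve group and models the hash as a random oracle:

```python
import hashlib
import random

# toy group: integers mod the prime p = 2**255 - 19, generator 2
# (illustrative parameters, NOT a secure instantiation)
p = 2**255 - 19
g = 2

def H(x: int) -> bytes:
    return hashlib.sha256(x.to_bytes(32, "big")).digest()

def sender_setup():
    a = random.randrange(2, p - 1)
    return a, pow(g, a, p)                     # A = g^a, sent to the receiver

def receiver_msg(c, A):
    # choice bit c: B = g^b if c == 0, else A * g^b
    b = random.randrange(2, p - 1)
    B = pow(g, b, p) if c == 0 else A * pow(g, b, p) % p
    return b, B                                # B is sent back to the sender

def sender_keys(a, A, B):
    k0 = H(pow(B, a, p))                       # key encrypting message 0
    k1 = H(pow(B * pow(A, -1, p) % p, a, p))   # key encrypting message 1
    return k0, k1

def receiver_key(b, A):
    # equals k_c for the receiver's choice c; the other key stays hidden
    return H(pow(A, b, p))
```

The sender derives both keys from the single message B, while the receiver can compute only the key matching its choice bit, which is what makes the exchange an oblivious transfer.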

The purpose of this work is to simplify and improve secure computation protocols.

Firstly, we simplify the framework of garbling schemes, specifically linear garbling schemes, by reducing simulation-based security to an easy-to-check algebraic property in the random oracle model; we explain how this property tightly connects linear garbling with linear secret sharing. We then provide a new garbling scheme in this framework, improving both on the communication complexity and on the parallelizability of the calls to the random oracle, compared to previous state of the art. Our approach takes advantage of partitioning the circuit into “units” that may be functionally more complex than single binary gates. This approach allows us to circumvent the known lower bound on the communication complexity of gate-by-gate linear garbling proved by Zahur, Rosulek, and Evans. Moreover, each specific choice of the partition of the circuit in units gives a specific tradeoff between parallel calls to the random oracle and communication complexity (for example, partitioning the whole circuit into a single unit allows static calls to the random oracle, while smaller units may result in lower communication complexity, depending on the circuit). Each unit is garbled at once, by using a standard polynomial decomposition technique for each unit’s associated function. We also extend this scheme to natively work over arbitrary finite fields, and show that for small fields our approach is more efficient than translating the circuit into boolean and using boolean garbling techniques.

Secondly, we use the same polynomial decomposition technique in a similar fashion to improve the communication complexity of secure computation protocols in the pre-processing model, by extending the notion of Beaver/multiplicative triples to tuples whose algebraic relations may be more complex.
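For reference, here is a plaintext two-party sketch of the standard Beaver-triple multiplication that these more general pre-processed tuples extend (the field size and sharing helpers are illustrative):

```python
import random

P = 2**61 - 1  # illustrative prime field for the shares

def share(v):
    # additive secret sharing between two parties
    r = random.randrange(P)
    return r, (v - r) % P

def reveal(s0, s1):
    return (s0 + s1) % P

def make_triple():
    # pre-processing: shares of random (a, b, c) with c = a*b
    a, b = random.randrange(P), random.randrange(P)
    a0, a1 = share(a)
    b0, b1 = share(b)
    c0, c1 = share(a * b % P)
    return (a0, b0, c0), (a1, b1, c1)

def beaver_mul(x_sh, y_sh, triple):
    (a0, b0, c0), (a1, b1, c1) = triple
    x0, x1 = x_sh
    y0, y1 = y_sh
    # the parties open the masked values d = x - a and e = y - b,
    # which leak nothing since a and b are uniform and secret
    d = reveal((x0 - a0) % P, (x1 - a1) % P)
    e = reveal((y0 - b0) % P, (y1 - b1) % P)
    # xy = c + d*b + e*a + d*e, computed locally on shares
    # (the public term d*e is added by one party only)
    z0 = (c0 + d * b0 + e * a0 + d * e) % P
    z1 = (c1 + d * b1 + e * a1) % P
    return z0, z1
```

One multiplication consumes one triple and costs only two openings online; generalizing the triple's algebraic relation, as this work does, lets a single pre-processed tuple cover a more complex sub-computation.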

Joint work with Tal Malkin and Abhi Shelat.

We study the achievability of different forms of obfuscation and related primitives A, B through relations of the form A => not(B) (meaning that A and B cannot both exist) or A => B (meaning that if A exists then so does B, or equivalently, if B does not exist then neither does A). Specifically: (1) We show that VGBO (Virtual Grey Box Obfuscation) for all circuits, which has been conjectured to be achieved by candidate constructions, would imply the failure of Canetti's 1997 AI-DHI1 (auxiliary-input DH inversion) assumption and of the corresponding AIPO (Auxiliary-Input Point-function Obfuscation) scheme. (2) We recover AIPO via a variant AI-DHI2 assumption, certain forms of UCE (Universal Computational Extractors), and a construction from any auxiliary-input OWF. (3) We show that iO (indistinguishability obfuscation) for all circuits implies the impossibility of certain forms of leakage-resilient encryption and other forms of UCE.

Joint work with Mihir Bellare and Igors Stepanovs

http://eprint.iacr.org/2015/

I will describe a construction of indistinguishability obfuscation from a compact form of functional encryption.

Joint work with Prabhanjan Ananth.

### Thursday, June 11th, 2015

Differing inputs obfuscation (diO) is a strengthening of indistinguishability obfuscation (iO) that has recently found applications to improving the efficiency and generality of obfuscation, functional encryption, non-black-box simulation, and several other related primitives. These applications require the “security” of diO to hold even in the presence of an auxiliary input that is generated together with the programs. However, recent negative results cast serious doubt on the plausibility of general-purpose diO with respect to general auxiliary inputs. This leaves open the existence of a variant of diO that is plausible, simple, and useful for applications.

We suggest such a diO variant that we call *public-coin* diO. A public-coin diO restricts the original definition of diO by requiring the auxiliary input to be a public, trapdoor-free, random string which is given as input to all relevant algorithms. In contrast to standard diO, it remains very plausible that current candidate constructions of iO for circuits satisfy the public-coin diO requirement.

We demonstrate the usefulness of the new notion by showing that several applications of diO can be obtained by relying on the public-coin variant instead. These include constructions of succinct obfuscation and functional encryption schemes for Turing Machines as well as obfuscation-based non-black-box simulation for (concurrent) zero-knowledge.

Joint work with Yuval Ishai and Amit Sahai.

We study the question of how to define, construct, and use obfuscators for probabilistic programs. We propose a notion that extends indistinguishability obfuscation to probabilistic circuits: It should be hard to distinguish between the obfuscations of any two circuits whose output distributions at each input are computationally indistinguishable, possibly in the presence of some auxiliary input. We call the resulting notion probabilistic indistinguishability obfuscation (pIO). We define several variants of pIO, and study relations among them. Moreover, we give a construction of one of our variants, called X-pIO, from sub-exponentially hard indistinguishability obfuscation (for deterministic circuits) and one-way functions.

We then move on to show a number of applications of pIO. In particular, we first give a general and natural methodology to achieve fully homomorphic encryption (FHE) from variants of pIO and of semantically secure encryption schemes. One instantiation leads to FHE from any X-pIO obfuscator and any re-randomizable encryption scheme that is slightly super-polynomially secure. We note that this constitutes the first construction of full-fledged FHE that does not rely on encryption with circular security.

Moreover, assuming sub-exponentially secure puncturable PRFs computable in NC1, sub-exponentially-secure indistinguishability obfuscation for (deterministic) NC1 circuits can be bootstrapped to obtain indistinguishability obfuscation for arbitrary (deterministic) poly-size circuits (previously such bootstrapping was known only assuming FHE with NC1 decryption algorithm).

**Breakout Session Track #1**

Speaker: Jeremiah Blocki, Carnegie Mellon University

Title: Usable and Secure Human Authentication

Abstract: A typical computer user today manages passwords for many different online accounts. Users struggle with this task ---often forgetting their passwords or adopting insecure practices, such as using the same passwords for multiple accounts and selecting weak passwords. Before we can design good password management schemes it is necessary to address a fundamental question: How can we quantify the usability or security of a password management scheme? In this talk we will introduce quantitative usability and security models. Notably, our user model, which is based on research on human memory and spaced rehearsal, allows us to analyze the usability of a large family of password management schemes while experimentally validating only the common user model underlying all of them. We argue that these quantitative models can guide the development of usable and secure password management schemes. In support of our argument we present Shared Cues, a simple password management scheme in which the user can generate many strong passwords after memorizing a few randomly generated stories. Our password management schemes are precisely specified and publishable: the security proofs hold even if the adversary knows the scheme and has extensive background knowledge about the user (hobbies, birthdate, etc.).

This talk is based on joint work with Manuel Blum and Anupam Datta

**Breakout Session Track #2**

Speaker: Aloni Cohen (MIT)

Title: Publicly Verifiable Software Watermarking

Abstract: Software Watermarking is the process of transforming a program into a functionally equivalent watermarked program in such a way that it is computationally hard to remove the mark without destroying functionality. Barak, Goldreich, Impagliazzo, Rudich, Sahai, Vadhan and Yang (CRYPTO 2001) defined software watermarking and showed that the existence of indistinguishability obfuscation implies that software watermarking is impossible. Given the recent candidate constructions of indistinguishability obfuscation, this result paints a bleak picture for the possibility of meaningful watermarking.

We show that slightly relaxing the functionality requirement gives us strong positive results for watermarking. Namely, instead of requiring the marked program to agree with the original unmarked program on all inputs, we require only that they agree on a large fraction of inputs. With this relaxation in mind, our contributions are as follows.

1. We define publicly verifiable watermarking, where marking a program requires a secret key but anyone can verify that a program is marked. The handful of existing watermarking schemes are secretly verifiable and, moreover, satisfy only a weak definition where the adversary is restricted in the type of unmarked programs it is allowed to produce (Naccache, Shamir and Stern, PKC 1999; Nishimaki, EUROCRYPT 2013). In addition, our definition requires security against chosen program attacks, where an adversary has access to an oracle that marks programs of her choice.

2. We construct a publicly verifiable watermarking scheme for any family of puncturable pseudo-random functions (PPRF), assuming indistinguishability obfuscation and injective one-way functions. We also give an indication of the limits of watermarking by constructing "unwatermarkable" families of pseudorandom functions.

Joint work with Justin Holmgren and Vinod Vaikuntanathan.

**Breakout Session Track #3**

Speaker: Sunoo Park (MIT)

Title: On Time and Order in Multiparty Computation

Abstract:

The availability of vast amounts of data is changing how we can make medical discoveries, predict global market trends, save energy, improve our infrastructures, and develop new educational strategies. One obstacle to this revolution is the willingness of different entities to share their data with others. The theory of secure multiparty computation (MPC) seemingly addresses this problem in the best way possible. Namely, parties learn the minimum necessary information: the value of the function computed on their joint data and nothing else. However, the theory of MPC does not deal with an important aspect: *when* do different players receive their output? In time-sensitive applications, the timing and order of output discovery may be another important deciding factor in whether parties choose to share their data via the MPC.

In this work, we incorporate time and order of output delivery into the theory of MPC. We first extend classical MPC to *ordered MPC*, where different players receive their outputs according to an order which is itself computed on the inputs to the protocol, and refine the classical notions of guaranteed output delivery and fairness to require instead *ordered output delivery* and *prefix-fairness*. We then define *timed-delay MPCs*, where explicit time delays are introduced into the output delivery schedule. We show general completeness theorems for ordered MPCs and timed-delay MPCs. We also introduce a new primitive called *time-line puzzles*, which are a natural extension of classical timed-release crypto, in which multiple events can be serialized in time.
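Classical timed-release crypto, which time-line puzzles extend, can be sketched with the Rivest-Shamir-Wagner repeated-squaring time-lock puzzle; the primes and delay parameter below are toy, illustrative choices, not secure ones:

```python
import math
import random

def make_puzzle(secret, t, p=1000003, q=1000033):
    # the creator knows the factorization of n, so it can shortcut
    # the t squarings using Euler's theorem
    n = p * q
    phi = (p - 1) * (q - 1)
    x = random.randrange(2, n)
    while math.gcd(x, n) != 1:
        x = random.randrange(2, n)
    key = pow(x, pow(2, t, phi), n)  # x^(2^t) mod n, computed cheaply
    return (n, x, t), (secret + key) % n

def solve_puzzle(puzzle, blob):
    n, x, t = puzzle
    y = x
    for _ in range(t):  # everyone else must do t *sequential* squarings
        y = y * y % n
    return (blob - y) % n
```

The delay comes from the conjectured inherent sequentiality of the t squarings; a time-line puzzle, as introduced above, would chain several such release times together.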

Next, we show how ordered MPC can give rise to MPCs which are provably "worth" joining, in competitive settings where relative time of output discovery may deter parties from joining the protocol. We formalize a model of collaboration and design a mechanism in which n self-interested parties can decide, based on their inputs, on an ordering of output delivery and a distribution of outputs to be delivered in the mandated order. The mechanism guarantees a higher reward *for all participants* when joining an ordered MPC, or declares that such a guarantee is impossible to achieve. We show a polynomial-time algorithm to compute the mechanism for a range of model settings.

Joint work with Pablo Azar and Shafi Goldwasser

**Breakout Session Track #1**

Speaker: Andrew Miller, University of Maryland

Title: Research Perspectives and Challenges for Bitcoin and Cryptocurrencies

Abstract: At this point, you will have seen at least four presentations on Bitcoin-related results. Out of necessity, you will therefore have heard at least one introduction to Bitcoin! So this talk will not introduce Bitcoin (for a self-contained introduction and survey of cryptocurrency research, see our Systematization of Knowledge (SoK) paper [BMCNKF15]).

Instead, I’ll complement the earlier presentations with the following:

1. The Bitcoin user experience is public-key cryptography *made to come alive!* I will give each of you your very own "pet Bitcoin," in the form of a paper wallet. Paper wallets have a public key on the front, and a private key inside a folded paper seal.

2. In the spirit of SoK, I will briefly present a "map" that organizes and recaps the most recent cryptocurrency results (including ones you’ve heard over the past few days!) and reveals many open problems and interesting directions.

3. In particular: while the academic research so far has only modelled a single cryptocurrency in isolation, in actuality there are already hundreds of rival "altcoins" that coexist and compete. The cryptocurrency development "industry" has already invented many surprising techniques for these concurrent systems to either support each other or violently attack! I’ll tell a couple of short stories that motivate adding "multiple cryptocurrency" extensions to our models.

[BMCNKF15] SoK: Research Perspectives and Challenges for Bitcoin and Cryptocurrencies. Joseph Bonneau, Andrew Miller, Jeremy Clark, Arvind Narayanan, Joshua A. Kroll, Edward W. Felten. https://eprint.iacr.org/2015/

**Breakout Session Track #2**

Speaker: Divya Gupta (UCLA)

Title: Secure Authentication and Communication with an Obfuscated Program: Hosting Services on an Untrusted Cloud

Abstract: We consider a scenario where a service provider has created a software service S and desires to outsource the execution of this service to an untrusted cloud. The software service contains secrets that the provider would like to keep hidden from the cloud. For example, the software might contain a secret database, and the service could allow users to make queries to different slices of this database depending on the user's identity.

This setting presents significant challenges not present in previous works on outsourcing or secure computation. Because secrets in the software itself must be protected against an adversary that has full control over the cloud that is executing this software, our notion implies indistinguishability obfuscation. Furthermore, we seek to protect knowledge of the software S to the maximum extent possible even if the cloud can collude with several corrupted users.

In this work, we provide the first formalizations of security for this setting, yielding our definition of a secure cloud service scheme. We provide constructions of secure cloud service schemes assuming indistinguishability obfuscation and non-interactive zero-knowledge proofs.

At the heart of our paper are novel techniques to allow parties to simultaneously authenticate and securely communicate with an obfuscated program, while hiding this authentication and communication from the entity in possession of the obfuscated program.

Joint work with Dan Boneh, Ilya Mironov and Amit Sahai

**Breakout Session Track #3**

Speaker: Antigoni Polychroniadou, Aarhus University

Title: Efficient Multi-Party Computation: from Passive to Active Security via Secure SIMD Circuits

Abstract: A central problem in cryptography is that of converting protocols that offer security against passive (or semi-honest) adversaries into ones that offer security against active (or malicious) adversaries. This problem has been the topic of a large body of work in the area of secure multiparty computation (MPC). Despite these efforts, there are still big efficiency gaps between the best protocols in these two settings. In two recent works, Genkin et al. (STOC 2014) and Ikarashi et al. (ePrint 2014) suggested the following new paradigm for efficiently transforming passive-secure MPC protocols into active-secure ones. They start by observing that in several natural information-theoretic MPC protocols, an arbitrary active attack on the protocol can be perfectly simulated in an ideal model that allows for additive attacks on the arithmetic circuit being evaluated. That is, the simulator is allowed to (blindly) modify the original circuit by adding a different field element to each wire. To protect against such attacks, the original circuit is replaced by a so-called AMD circuit, which offers protection against such attacks at only a constant multiplicative overhead in size.
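The flavor of additive-attack protection can be sketched with a minimal AMD-style encoding: a value x is stored together with a secret random multiplier r and the product x*r, so that any nonzero blind additive shift is caught with overwhelming probability over the choice of r (field size illustrative; the paper's AMD circuits embed this idea into the circuit itself):

```python
import random

P = 2**61 - 1  # illustrative prime field

def amd_encode(x):
    # redundant encoding of x under a secret random multiplier r
    r = random.randrange(1, P)
    return [x % P, r, x * r % P]

def amd_verify(c):
    x, r, t = c
    return x * r % P == t

def additive_attack(c, deltas):
    # a blind adversary can only add a fixed delta to each stored value,
    # without seeing the values themselves
    return [(v + d) % P for v, d in zip(c, deltas)]
```

A shift (d1, d2, d3) passes verification only if d1*r + d2*x + d1*d2 = d3, which for a nonzero attack holds with probability about 1/P over the secret r.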

Our motivating observation is that in the most efficient known information-theoretic MPC protocols, which are based on packed secret sharing, it is not the case that general attacks reduce to additive attacks. Instead, the corresponding ideal attack can include limited forms of linear combinations of wire values. We extend the AMD circuit methodology to so-called secure "SIMD circuits", which offer protection against this more general class of attacks.

We apply secure SIMD circuits to obtain several asymptotic and concrete efficiency improvements over the current state of the art. In particular, we improve the additive per-layer overhead of the current best protocols from O(n^2) to O(n), where n is the number of parties, and obtain the first protocols based on packed secret sharing that "natively" achieve near-optimal security without incurring the high concrete cost of Bracha’s committee-based security amplification method.

Our analysis is based on a new modular framework for proving reductions from general attacks to algebraic attacks. This framework allows us to reprove previous results in a conceptually simpler and more unified way, as well as obtain our new results.

Joint work with Daniel Genkin and Yuval Ishai

**Breakout Session Track #1**

Speaker: Samee Zahur, University of Virginia

Title: Verifiable Computation: Pinocchio, Geppetto and Beyond

Abstract: In this talk, we will describe some of the recent advances in Verifiable Computation technology, in which a computationally weak client outsources computation to a cloud server and is convinced of the correctness of the result.

We will start the discussion with a brief introduction to Quadratic Arithmetic Programs (QAPs) as they are used by the Pinocchio system, and show how the client's verification complexity can be made independent of the computation's complexity. We will then move on to some of the challenges that remain, and how the recent Geppetto work addresses some of them. Finally, time permitting, we will close with some interesting challenges that still stand in the way of making these systems practical.

**Breakout Session Track #2**

Speaker: Justin Holmgren, MIT

Title: Succinct RAM Garbling and Obfuscation

Abstract: Program obfuscation, and in particular the notion of indistinguishability obfuscation (IO), has been shown to have widespread applications in cryptography and beyond. However, current candidate indistinguishability obfuscators require programs to first be converted into a circuit or Turing Machine, even if the input program is in fact written for a RAM machine. This does not allow obfuscated programs to efficiently use the capabilities of modern computers.

In this talk, we show how to construct succinct and efficient IO for RAM programs given (sub-exponentially secure) IO for circuits. In addition, the size of our obfuscation depends only polynomially on the input program's description size, and not on its space complexity as in previous work. We first construct a sequence of intermediate primitives based on standard (polynomially secure) IO, culminating in a succinct garbling scheme for RAM programs. We then apply a standard transformation which converts the garbling scheme into an obfuscator with similar efficiency parameters.

Joint work with Ran Canetti

**Breakout Session Track #3**

Speaker: Alessandra Scafuro, BU and Northeastern

Title: Practical UC security in the Global Random Oracle model.

Abstract: We model the Random Oracle in the GUC (Generalized UC) framework. This new model opens the door to the design of highly efficient protocols that satisfy strong composability guarantees, without the need for any trusted party.

In the Global Random Oracle model, there is one global random oracle that is publicly accessible to all executions and all parties. Contrast this with the CRS model, where the security of each protocol execution relies on the fact that a new CRS is freshly sampled for each new execution and that it is kept private.

Joint work with Ran Canetti and Abhishek Jain

**Breakout Session Track #1**

Speaker: Vanessa Teague, University of Melbourne

Title: End-to-end verifiable voting in a state election

Abstract: Electronic voting computations clearly need to be verifiable, in the sense of providing a public proof that all encrypted votes have been correctly counted. The e-voting literature also includes interesting protocols for ensuring that a voter (who cannot perform cryptographic computations directly) can nevertheless gain some evidence that her vote has been cast as she intended. We describe a system with both these kinds of verifiability, based on the Prêt à Voter end-to-end verifiable voting system. The system ran successfully in the state election in Victoria (Australia) in November 2014, taking a total of 1121 votes from supervised polling places inside Victoria and overseas.

Joint work with Craig Burton, Chris Culnane, Peter Y A Ryan and Steve Schneider.

**Breakout Session Track #2**

Brief Announcements

**Breakout Session Track #3**

Brief Announcements

### Friday, June 12th, 2015

We construct a general multiparty computation (MPC) protocol in the common random string (CRS) model with only two rounds of interaction, which is known to be optimal. In the honest-but-curious setting we only rely on the learning with errors (LWE) assumption, and in the fully malicious setting we additionally assume the existence of non-interactive zero knowledge arguments (NIZKs). Previously, Asharov et al. (EUROCRYPT '12) showed how to achieve three rounds based on LWE and NIZKs, while Garg et al. (TCC '14) showed how to achieve the optimal two rounds based on indistinguishability obfuscation, but it was unknown if two rounds were possible under simpler assumptions without obfuscation.

Our approach relies on *multi-key fully homomorphic encryption (MFHE)*, introduced by Lopez-Alt et al. (STOC '12), which enables homomorphic computation over data encrypted under different keys. We simplify a recent construction of MFHE based on LWE by Clear and McGoldrick (ePrint '14), and give a stand-alone exposition of that scheme. We then extend this construction to allow for a one-round distributed decryption of a multi-key ciphertext. Our entire MPC protocol consists of the following two rounds:

- Each party individually encrypts its input under its own key and broadcasts the ciphertext. All parties can then homomorphically compute a multi-key encryption of the output.

- Each party broadcasts a partial decryption of the output using its secret key. The partial decryptions can be combined to recover the output in plaintext.
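
The two-round message pattern above can be sketched with a toy stand-in for MFHE: a one-time-pad-style scheme Enc_k(x) = x + k mod P, which is additively homomorphic even across different keys. This is purely illustrative and not the actual LWE-based construction; in particular, the pre-shared zero shares below model the noise "smudging" that protects partial decryptions in the real scheme.

```python
import random

P = 2**61 - 1                 # toy prime-field modulus (illustrative)
rng = random.Random(42)

inputs = {"A": 10, "B": 25, "C": 7}            # each party's private input
keys = {p: rng.randrange(P) for p in inputs}   # each party's own secret key

# Pre-shared additive shares of zero (a stand-in for noise smudging; a
# real protocol does not need this correlated setup).
parties = list(inputs)
zero_shares = {p: rng.randrange(P) for p in parties[:-1]}
zero_shares[parties[-1]] = -sum(zero_shares.values()) % P

# Round 1: each party broadcasts an encryption of its input under its own
# key.  Toy scheme: Enc_k(x) = x + k (mod P).
ciphertexts = {p: (inputs[p] + keys[p]) % P for p in inputs}

# Everyone locally computes a "multi-key" encryption of the output
# (the function evaluated here is simply the sum of all inputs).
output_ct = sum(ciphertexts.values()) % P

# Round 2: each party broadcasts a masked partial decryption with its key.
partials = {p: (zero_shares[p] - keys[p]) % P for p in inputs}

# Combining all partial decryptions recovers the output in plaintext: the
# zero shares cancel, and no individual key (hence no input) is exposed.
output = (output_ct + sum(partials.values())) % P
print(output)  # 42
```

The point of the sketch is only the communication pattern: one broadcast per party per round, with all homomorphic work done locally in between.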

Joint work with Pratyay Mukherjee.

We put forward a new approach for the design of efficient multiparty protocols:

1. Design a protocol for a small number of parties (say, 3 or 4) which achieves security against a single corrupted party. Such protocols are typically easy to construct as they may employ techniques that do not scale well with the number of corrupted parties.

2. Recursively compose the protocol with itself to obtain an efficient n-party protocol which achieves security against a constant fraction of corrupted parties.

The second step of our approach combines the player emulation technique of Hirt and Maurer (J. Cryptology, 2000) with constructions of logarithmic-depth formulae which compute threshold functions using only constant fan-in threshold gates.

Using this approach, we simplify and improve on previous results in cryptography and distributed computing. In particular:

- We provide conceptually simple constructions of efficient protocols for Secure Multiparty Computation (MPC) in the presence of an honest majority, as well as broadcast protocols from point-to-point channels and a 2-cast primitive.

- We obtain new results on MPC over black-box groups and other algebraic structures.

The above results rely on the following complexity-theoretic contributions, which may be of independent interest:

- We show that for all integers j, k such that m = (k-1)/(j-1) is an integer, there is an explicit (poly(n)-time) construction of a logarithmic-depth formula which computes a good approximation of an (n/m)-out-of-n threshold function using only j-out-of-k threshold gates and no constants.

- For the special case of n-bit majority from 3-bit majority gates, a non-explicit construction follows from the work of Valiant (J. Algorithms, 1984). For this special case, we provide an explicit construction with a better approximation than for the general threshold case, and also an exact explicit construction based on standard complexity-theoretic or cryptographic assumptions.
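
The amplification behind such majority formulae can be checked numerically: a single 3-bit majority gate applied to independent, identically biased bits pushes the bias toward 0 or 1, so stacking a logarithmic number of levels suffices. A minimal sketch, assuming fully independent leaves (as in the non-explicit probabilistic argument):

```python
def maj3_prob(p):
    """Probability that MAJ3 outputs 1 when its three inputs are
    independent bits, each equal to 1 with probability p."""
    return 3 * p**2 * (1 - p) + p**3   # = 3p^2 - 2p^3

# Start from a small constant bias above 1/2 (leaves sampled from an
# input with a 0.55-fraction of 1s) and iterate one MAJ3 level at a time.
p, depth = 0.55, 0
while p < 1 - 1e-9:
    p, depth = maj3_prob(p), depth + 1

# Near 1, the error e = 1 - p roughly cubes down as 3e^2, so a
# logarithmic number of levels drives it below any fixed threshold.
print(depth)
```

Note that p = 1/2 is a fixed point of the map, which is why a bias bounded away from 1/2 is needed at the leaves.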

Joint work with Gil Cohen, Ivan Damgard, Yuval Ishai, Jonas Kolker, Peter Bro Miltersen and Ran Raz.

An exciting recent direction in MPC research is secure computation of RAM programs. A fundamental building block of MPC for RAM programs is SC-ORAM, i.e., a secure computation protocol for the Oblivious RAM functionality, which on input a secret-sharing of a memory address computes a secret-sharing of the corresponding memory record. The main idea for realizing SC-ORAM has been to create a client-server ORAM scheme (which guarantees privacy against the server storing the encrypted memory, but not against the client who retrieves it) whose client algorithm is "secure-computation friendly", in the sense that secure computation techniques yield a plausibly efficient protocol for secure computation of the corresponding shared-input-to-shared-output functionality. So far, most efforts on SC-ORAM have been devoted to the two-party setting, but since an honest-majority setting makes secure computation fundamentally easier, it seems worthwhile to investigate whether SC-ORAM, and hence secure computation of RAM programs, can be realized more efficiently in this setting. We will report on some initial progress in this direction by describing the design of a 3-party SC-ORAM protocol whose implementation is about two orders of magnitude faster than existing 2-party SC-ORAM implementations, bringing SC-ORAM to within the ballpark of client-server ORAM schemes.

Protecting software from malware injection is the holy grail of modern computer security. Despite intensive efforts by the scientific and engineering community, the number of successful attacks continues to increase.

We have achieved a breakthrough in preventing adversaries from inserting malware into programs. The key idea is to use the very insertion of the malware itself to allow the system to detect it. This is, in our opinion, close in spirit to the famous Heisenberg Uncertainty Principle: the attackers, no matter how clever, no matter when or how they insert their malware, change the state of the system they are attacking. This fundamental idea is a game changer. Our system does not rely on heuristics; instead, it is proven secure in a formal and precise mathematical sense, and it achieves its strong security guarantees with minimal and realistic CPU modifications. We therefore anticipate that our system and its formal mathematical security treatment will open new directions in software protection.

This is joint work with Richard Lipton and Rafail Ostrovsky.

We prove that finding a Nash equilibrium of a game is hard, assuming the existence of indistinguishability obfuscation and injective one-way functions with sub-exponential hardness. We do so by showing how these cryptographic primitives give rise to a hard computational problem that lies in the complexity class PPAD, for which finding a Nash equilibrium is known to be complete.

Previous proposals for basing PPAD-hardness on program obfuscation considered a strong "virtual black-box" notion that is subject to severe limitations and is unlikely to be realizable for the programs in question. In contrast, for indistinguishability obfuscation no such limitations are known, and recently, several candidate constructions of indistinguishability obfuscation were suggested based on different hardness assumptions on multilinear maps.

Our result provides further evidence of the intractability of finding a Nash equilibrium, one that is extrinsic to the evidence presented so far.

Joint work with Nir Bitansky and Alon Rosen (http://eccc.hpi-web.de/).

In his seminal result, Cleve [STOC'86] established that secure distributed computation with guaranteed fairness is impossible in the presence of dishonest majorities. Numerous proposals for relaxed notions of fairness ensued, weakening the desired security guarantees in various ways. While these works also provide completeness results (i.e., protocols achieving their respective fairness notions), their assessment is typically of an all-or-nothing nature.

In this work we put forth a comparative approach to fairness. We introduce notions that when presented with two arbitrary protocols, provide the means to answer the question "Which of the protocols is fairer?" The basic idea is that we can use an appropriate utility function to express the preferences of an adversary who wants to break fairness. Thus, we can compare protocols with respect to how fair they are, placing them in a partial order according to this relative-fairness relation.

After formulating such utility-based fairness notions, we turn to the question of finding optimal protocols, i.e., maximal elements in the above partial order. We investigate, and answer, this question for secure function evaluation, both in the two-party and multi-party settings.

Our approach builds on machinery developed in the recently proposed "Rational Protocol Design" framework, by Garay, Katz, Maurer, Tackmann and Zikas [FOCS'13]. The talk will also include a brief introduction to the framework.

This is joint work with Juan Garay (Yahoo Labs), Jon Katz (UMD) and Vassilis Zikas (ETH Zurich).

"Cryptographic Agents" is a new framework that unifies various cryptographic objects, similarly to how the Universal Composition framework unifies various multi-party computation tasks. Cryptographic Agents can model existing and new versions of various forms of obfuscation, functional encryption, fully homomorphic encryption, and even certain hardness assumptions related to primitives like multilinear maps. The framework uses a new definition that does not rely on simulation, and hence sidesteps typical impossibility results. It also admits a composition theorem, which can be used to derive new constructions.

I shall present the basic framework (appeared at Eurocrypt'15) and some recent results.

Based on joint work with Shweta Agrawal, Shashank Agrawal and Ching-Hua Yu.

Yao's garbled circuit construction is a fundamental construction in cryptography, and recent efficiency optimizations have brought it much closer to practice. However, these constructions work only for circuits, and garbling a RAM program involves the inefficient process of first converting it into a circuit. Towards the goal of avoiding this inefficiency, Lu and Ostrovsky (Eurocrypt 2013) introduced the notion of "garbled RAM" as a method to garble RAM programs directly. It can be seen as a RAM analogue of Yao's garbled circuits, such that the size of the garbled program, and the time it takes to create and evaluate it, are proportional only to the running time of the RAM program rather than its circuit size.

In this talk, I will describe a construction with strictly poly-logarithmic overhead in both space and time, based only on the minimal and necessary assumption that one-way functions exist. Furthermore, this construction makes only black-box use of one-way functions. This scheme allows for garbling multiple programs being executed on a persistent database.
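
For context, the circuit-level object that garbled RAM generalizes can be sketched in a few lines: a single garbled AND gate in the spirit of Yao's construction, with hashed wire labels as encryption keys. This toy omits point-and-permute and everything needed for real security; it only shows the label-and-table mechanics.

```python
import hashlib, os, random

def H(a, b):
    return hashlib.sha256(a + b).digest()

def xor(x, y):
    return bytes(p ^ q for p, q in zip(x, y))

# Two random 32-byte labels per wire: one encoding 0, one encoding 1.
labels = {w: [os.urandom(32) for _ in range(2)] for w in ("a", "b", "out")}

# Garbled AND gate: each row encrypts the output label for one input pair,
# keyed by the corresponding pair of input labels. Rows are shuffled so
# their order reveals nothing about the truth table.
table = [xor(H(labels["a"][x], labels["b"][y]), labels["out"][x & y])
         for x in (0, 1) for y in (0, 1)]
random.shuffle(table)

def evaluate(la, lb):
    """Given one label per input wire, recover the output label by trial
    decryption (a real scheme uses point-and-permute to pick the right
    row instead of this membership test)."""
    for row in table:
        candidate = xor(row, H(la, lb))
        if candidate in labels["out"]:
            return candidate
    raise ValueError("no row decrypted")

# Evaluate on inputs a=1, b=0; the evaluator sees only opaque labels.
out = evaluate(labels["a"][1], labels["b"][0])
print(out == labels["out"][0])  # AND(1, 0) = 0
```

Garbled RAM extends this idea so that garbled material lets an evaluator follow the memory accesses of a running program, rather than walking a fixed circuit gate by gate.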

Based on joint works with Steve Lu, Rafail Ostrovsky, and Alessandra Scafuro.