When and how can we guarantee that the conclusions arrived at by a complicated and expensive data analysis are correct? A sequence of recent works explores the possibility of constructing interactive proof systems that can verify the conclusions using less data and computation than would be needed to replicate the analysis. We would also like the resources needed to construct the proof to be as small as possible (ideally close to the cost of simply performing the analysis).
I will survey this line of work, highlighting two recent results:
- On the positive side, we construct protocols that allow efficient verification for a rich class of analyses.
- On the negative side, we show that for several natural data analysis tasks, constructing the proof can require significantly more data than simply running the analysis.
Based on joint works with Tal Herman.
Folding schemes enable efficient constructions of incrementally verifiable computation and proof-carrying data. This talk covers recent work on hash-based folding schemes:
- We introduce interactive oracle reductions (IORs), a natural generalization of interactive oracle proofs.
- Given a suitable IOR, we show how to build a hash-based folding scheme.
- We present a highly efficient IOR: the prover runs in time linear in the length of the statement, and the verifier makes a constant number of queries.
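The core idea behind folding can be illustrated on a much simpler relation than the talk's hash-based IOR construction. The sketch below (all names hypothetical, and the relation chosen purely for illustration) folds two satisfying instance-witness pairs of a linear relation A·w = x over a prime field into a single pair via a random linear combination; soundness in a real scheme would derive the randomness from a transcript hash rather than local randomness.

```python
# Toy folding for the linear relation A @ w == x over F_p.
# A sketch of the folding idea only, not the hash-based scheme from the talk.
import random

P = 2**61 - 1  # a prime modulus (illustrative choice)

def matvec(A, w):
    return [sum(a * b for a, b in zip(row, w)) % P for row in A]

def fold(inst1, inst2):
    """Fold two (x, w) pairs satisfying A @ w == x into one pair
    via a random linear combination; by linearity the folded pair
    still satisfies the relation."""
    (x1, w1), (x2, w2) = inst1, inst2
    r = random.randrange(P)  # in a real scheme r comes from a transcript hash
    x = [(a + r * b) % P for a, b in zip(x1, x2)]
    w = [(a + r * b) % P for a, b in zip(w1, w2)]
    return (x, w)

A = [[1, 2], [3, 4]]
w1, w2 = [5, 6], [7, 8]
x, w = fold((matvec(A, w1), w1), (matvec(A, w2), w2))
assert matvec(A, w) == x  # the folded pair satisfies the same relation
```

The point of the sketch is the shape of the operation: two instances in, one instance out, with checking the folded instance as cheap as checking either original.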
Folding schemes (also known as accumulation schemes) are a powerful tool for building scalable and concretely efficient proof systems. Lattice-based constructions retain many of the desirable properties of elliptic curve-based schemes, while offering the added benefits of potentially faster provers and plausible post-quantum security. In this talk, I will highlight key technical challenges in designing lattice-based folding schemes and present several techniques—primarily related to lattice-based range proofs—for addressing these challenges.
Based on joint work with Dan Boneh.
Probabilistically checkable proofs (PCPs) allow encoding a computation so that it can be quickly verified by only reading a few symbols. Inspired by tree codes (Schulman, STOC'93), we propose tree PCPs; these are PCPs that evolve as the computation progresses so that a proof for time t is obtained by appending a short string to the end of the proof for time t-1. At any given time, the tree PCP can be locally queried to verify the entire computation so far.
We construct tree PCPs for non-deterministic space-s computation, where at time step t, the proof only grows by an additional poly(s,log(t)) bits, and the number of queries made by the verifier to the overall proof is poly(s) * t^epsilon, for an arbitrary constant epsilon > 0.
Tree PCPs are well-suited to proving correctness of ongoing computation that unfolds over time. They may be thought of as an information-theoretic analog of the cryptographic notion of incrementally verifiable computation (Valiant, TCC'08). In the random oracle model, tree PCPs can be compiled to realize a variant of incrementally verifiable computation where the prover is allowed a small number of queries to a large evolving state. This yields the first construction of (a natural variant of) IVC in the random oracle model.
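The incremental interface described above can be sketched with a toy append-only structure (a plain hash chain; names and step data hypothetical). This illustrates only the two properties emphasized in the abstract, that the proof for time t extends the proof for time t-1 by a short string and that links can be checked by reading few symbols; it has none of a tree PCP's actual soundness or query structure.

```python
# Append-only proof sketch: the proof at time t is the proof at time t-1
# plus one short chunk, and a single link is checkable by reading only
# two adjacent chunks. A toy illustration of the interface only.
import hashlib

def chunk(prev_digest, step_data):
    """A short, fixed-size addition to the proof for one time step."""
    return hashlib.sha256(prev_digest + step_data).digest()

proof = [hashlib.sha256(b"x0").digest()]             # proof at time 0
for t in range(1, 6):                                # advance the computation
    proof = proof + [chunk(proof[-1], b"step%d" % t)]  # append-only update

# "Local query": check one link by reading only two adjacent chunks.
t = 3
assert proof[t] == chunk(proof[t - 1], b"step%d" % t)
assert all(len(c) == 32 for c in proof)              # each addition is short
```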
This is joint work with Alon Rosen and Ron Rothblum.
Incrementally verifiable computation (IVC) [Valiant, TCC'08] allows one to iteratively prove that a configuration x_0 reaches another configuration x_T after repeated applications of a (possibly non-deterministic) transition function M. The key requirement is that the size of the proof and the time to update the proof is sublinear in the number of steps T. IVC has numerous applications, notably including proving correctness of virtual machine executions in blockchains.
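The IVC syntax can be made concrete with a toy designated-verifier sketch (all names hypothetical). A shared HMAC key stands in for a real proof system, an assumption made only so the sketch runs; real IVC is publicly verifiable with an untrusted prover. What the sketch does show faithfully is the interface: the proof for step i is computed from the proof for step i-1, and its size stays constant as T grows.

```python
# Toy, designated-verifier illustration of the IVC interface.
# A shared HMAC key replaces a real succinct proof system (sketch-only
# assumption); real IVC needs no shared secret and no trusted prover.
import hmac, hashlib

KEY = b"shared-secret"                    # hypothetical shared key
M = lambda x: (x * x + 1) % 10_007        # some transition function

def tag(x0, xi, i):
    msg = b"%d|%d|%d" % (x0, xi, i)
    return hmac.new(KEY, msg, hashlib.sha256).digest()

def prove_step(x0, x_prev, pi_prev, i):
    """Check the incoming proof, apply M once, emit a same-size proof."""
    assert hmac.compare_digest(pi_prev, tag(x0, x_prev, i - 1))
    x_i = M(x_prev)
    return x_i, tag(x0, x_i, i)

def verify(x0, xT, T, pi):
    return hmac.compare_digest(pi, tag(x0, xT, T))

x0 = 3
x, pi = x0, tag(x0, x0, 0)
for i in range(1, 100):
    x, pi = prove_step(x0, x, pi, i)      # constant-time proof update
assert verify(x0, x, 99, pi)
assert len(pi) == 32                      # proof size independent of T
```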
Currently, IVC for NP is only known to exist in non-standard idealized models, or based on knowledge assumptions. No constructions are known from standard assumptions, or *even in the random oracle model*. Furthermore, as observed in prior works, since IVC for NP implies adaptive succinct non-interactive arguments for NP, the work of Gentry-Wichs [STOC'11] seemingly poses barriers to constructing IVC for NP from falsifiable assumptions.
In this work, we observe that the Gentry-Wichs barrier can be overcome for IVC for NP. We show the following two results:
* Assuming subexponential iO and LWE (or bilinear maps), we construct IVC for all NP with proof size poly(|x_i|, log T).
* Assuming subexponential iO and injective PRGs, we construct IVC for trapdoor IVC languages where the proof size is poly(log T). Informally, an IVC language has a trapdoor if there exists a (not necessarily easy to find) polynomial-sized circuit that determines if a configuration x_i is reachable from x_0 in i steps.