This workshop will build on recent insights about two types of approximations for CSPs – quantitative approximation (Max-CSPs) and qualitative approximation (Promise CSPs). Talks will be dedicated to the underlying techniques, including analytical...
Collaborative zk-SNARKs, introduced by Ozdemir and Boneh (USENIX’22), are a multi-prover extension of zk-SNARKs in which multiple mutually distrustful provers, each holding a private input, jointly generate a zk-SNARK that attests to the correctness of a computation over their collective secrets.
A sequence of recent works has proposed efficient constructions of collaborative zk-SNARKs following a common template: designing secure multiparty computation (MPC) protocols that emulate the behavior of a zk-SNARK prover, while avoiding non-black-box use of cryptographic primitives.
In this talk, I will survey this framework and highlight recent advances in the design and implementation of collaborative zk-SNARK protocols.
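The template above runs the prover's arithmetic inside an MPC protocol, so no single party ever sees the combined witness. A minimal sketch of the basic building block, additive secret sharing over a prime field (a toy illustration, not any specific protocol from these works; the field `P` and all names are assumptions for the example):

```python
# Toy illustration: additive secret sharing, the kind of MPC primitive
# over which collaborative-proof templates run the prover's arithmetic.
import random

P = 2**61 - 1  # a prime field modulus, chosen here only for illustration

def share(x, n):
    """Split x into n additive shares that sum to x mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Two mutually distrustful provers each hold a private input.
a, b = 1234, 5678
shares_a = share(a, 2)
shares_b = share(b, 2)

# Linear steps of the prover's computation can be performed share-wise
# with no interaction: each party adds its own shares locally, and the
# shares of the result reconstruct to the sum of the secrets.
local_sums = [(sa + sb) % P for sa, sb in zip(shares_a, shares_b)]
assert reconstruct(local_sums) == (a + b) % P
```

Multiplications require interaction, which is where most of the protocol-design effort in these constructions goes; the black-box requirement means the MPC never has to evaluate hash functions or other primitives inside a circuit.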
I’ll describe two new memory-checking arguments, Twist and Shout, that exploit repeated structure in the computation to be proved. Shout gives a fast, simple protocol for batch evaluation: proving many evaluations of the same function. By paying a small one-time cost, the prover avoids circuit-based encodings and slashes the per-evaluation cost. Twist extends Shout to efficiently handle read/write memory. Together, Twist and Shout enable SNARKs for CPU execution (zkVMs) that are both simpler and faster than prior work. They also yield a streaming prover with low memory usage, no SNARK recursion required.
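The abstract does not spell out the protocol, but lookup-style batch-evaluation arguments in this family reduce each read of a fixed table to an inner product with a one-hot indicator vector, so committing to the table once amortizes across many evaluations. A toy sketch of that addressing idea (the table and names here are assumptions for illustration, not the actual Shout construction):

```python
# Toy sketch of one-hot addressing: each read of a fixed table T is the
# inner product of T with a one-hot indicator vector, so many evaluations
# of the same function become many such inner products against one table.
T = [x * x % 97 for x in range(16)]  # toy table: squaring mod 97

def one_hot(i, n):
    """Indicator vector with a 1 in position i."""
    return [1 if j == i else 0 for j in range(n)]

def read(table, i):
    ind = one_hot(i, len(table))
    return sum(t * b for t, b in zip(table, ind))

# Every table read is recovered exactly by the inner product.
assert all(read(T, i) == T[i] for i in range(16))
```

The one-time cost is committing to the table; each subsequent evaluation only needs the (sparse) indicator, which is what avoids re-encoding the function as a circuit per evaluation.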
Verifying image provenance has become an important topic, especially in the realm of news media. To address this issue, the Coalition for Content Provenance and Authenticity (C2PA) developed a standard to verify image provenance that relies on digital signatures produced by cameras. However, photos are usually edited before being published, and a signature on an original photo cannot be verified given only the published edited image. In this work, we describe VerITAS, a system that uses zero-knowledge proofs (zk-SNARKs) to prove that only certain edits have been applied to a signed photo. While past work has created image editing proofs for photos, VerITAS is the first to do so for realistically large images (30 megapixels). Our key innovation enabling this leap is the design of a new proof system that enables proving knowledge of a valid signature on a large amount of witness data. We run experiments on realistically large images that are more than an order of magnitude larger than those tested in prior work. In the case of a computationally weak signer, such as a camera, we are able to generate a proof of valid edits for a 90 MB image in just over thirteen minutes, costing about $0.54 on AWS per image. In the case of a more powerful signer, we are able to generate a proof of valid edits for a 90 MB image in just over three minutes, costing only $0.13 on AWS per image. Either way, proof verification time is less than a second. Our techniques apply broadly whenever there is a need to prove that an efficient transformation was applied correctly to a large amount of signed private data.
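To make the statement concrete, the relation a VerITAS-style proof attests to can be checked in the clear as follows (illustration only: a real system proves this in zero knowledge, keeping the original image and signature private; HMAC and the crop function below are stand-ins, not the C2PA signature scheme or VerITAS's edit set):

```python
# Illustration only: the relation a VerITAS-style proof establishes,
# checked here in the clear rather than in zero knowledge.
import hashlib
import hmac

SECRET_KEY = b"camera-key"  # stand-in for the camera's signing key

def sign(data):
    # HMAC stands in for the camera's digital signature in this sketch.
    return hmac.new(SECRET_KEY, data, hashlib.sha256).digest()

def crop(pixels, start, end):
    # A permitted edit: cropping a 1-D pixel array.
    return pixels[start:end]

# The camera signs the original photo.
original = bytes(range(100))
signature = sign(original)

# The publisher applies a permitted edit and publishes only the result.
published = crop(original, 10, 60)

def relation_holds(original, signature, published):
    """The statement a zk proof would attest to, without revealing
    `original` or `signature`: (1) the signature on the original
    verifies, and (2) the published image is a permitted edit of it."""
    return (hmac.compare_digest(sign(original), signature)
            and published == crop(original, 10, 60))

assert relation_holds(original, signature, published)
```

The paper's key difficulty is exactly the first clause: proving knowledge of a valid signature over tens of megabytes of witness data, which is what the new proof system is built to handle.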
Responsible deployment of AI models in high-stakes societal applications requires that these models be trustworthy—exhibiting properties such as fairness, privacy and interpretability. However, legal and IP constraints often necessitate that models remain confidential, which leads to the breakdown of many trustworthy AI tools in practice. This tension gives rise to a central challenge: how can we prove and verify key properties of ML models without revealing the models themselves? In this talk, I will present my recent work that addresses this challenge using zero-knowledge proofs (ZKPs). Specifically, I will describe: (1) FairProof, a system for publicly certifying individual fairness in neural networks while preserving model confidentiality, and (2) ExpProof, which operationalizes explanations even in adversarial settings. Together, these systems advance the goal of building verifiable and accountable AI.