A flurry of exciting recent research has shown that quantum cryptosystems (beyond QKD) can exist relative to certain oracles that break all classical cryptography. But obtaining unrelativized constructions of quantum cryptosystems from assumptions clearly weaker than the existence of one-way functions remained open.
In this talk, I will describe how to base quantum commitments and secure computation on concrete and well-studied mathematical assumptions, from the quantum advantage literature, that do not imply the existence of one-way functions.
The celebrated Karp-Lipton Theorem says that if NP is in P/poly, then the polynomial hierarchy collapses to NP^NP. I've been curious for decades about which quantum analogues of this statement hold. In 2010, Andy Drucker and I showed that if NP is in BQP/qpoly -- that is, if quantum advice makes NP-complete problems easy -- then coNP^NP is contained in QMA^PromiseQMA. In 2006, I claimed to show that if PP is in BQP/qpoly, then the counting hierarchy CH collapses to PP, but unfortunately my proof had a gap. Last year my (then-)PhD student Justin Yirka filled the gap, in part by using the main result that Drucker and I proved in 2010: namely, the "BQP/qpoly = YQP/poly" theorem. I'll tell this whole story and then speculate about where to go next in understanding quantum advice.
Consider the following three questions:
- Alice sends messages separately to Bob and Charlie. Can Bob and Charlie, without communicating, be assured that Alice sent both of them the same message?
- Is it possible to transmit interesting quantum states with only classical communication?
- Are cryptographic hash functions built using quantum-resistant tools necessarily secure against quantum attacks?
One-shot signatures (OSS), a new tool from quantum cryptography, give perhaps surprising answers to these questions. In this talk, I will explain the concept of OSS, discuss a number of interesting applications, and sketch how to construct them.
In the standard local stochastic noise model, we construct a fully quantum fault-tolerance protocol with polyloglog depth overhead (and polylog space overhead). Our protocol is fully quantum in the sense that it assumes neither noiseless auxiliary classical computation nor adaptivity. The main component of our construction is a procedure for incorporating classical fault tolerance into a recent protocol of Nguyen and Pattison, which removes these assumptions. When applied to constant-depth IQP circuits, this allows us to conclude (under complexity-theoretic assumptions) that sampling from noisy quantum circuits of polyloglog depth yields a superpolynomial quantum advantage.
In a model of fault-tolerant quantum computation with quick and noiseless polyloglog-time auxiliary classical computation, we construct a fault tolerance protocol with $\tilde{O}(1)$-space and $\tilde{O}(1)$-time overhead, where $\tilde{O}$ hides sub-polylog factors. Our construction utilizes constant-rate quantum locally testable codes (qLTCs), new fault-tolerant gadgets on qLTC/qLDPC codes including sub-logarithmic spacetime overhead magic state distillation, and a new analysis framework which we expect will be of further interest.
In the effort to build a large-scale fault-tolerant quantum computer, quantum low-density parity-check (qLDPC) codes have been established as promising candidates for low-overhead memory compared to surface codes. Performing logical computation on qLDPC memory, however, has been a long-standing challenge in both theory and practice.
In this work, we propose extractors, a new primitive that can augment any qLDPC memory into a computational block. In particular, any logical Pauli operator supported on the memory can be fault-tolerantly measured in O(d) physical syndrome-extraction cycles, without rearranging qubit connectivity. We further propose the extractor architecture: a fixed-connectivity, LDPC architecture built by connecting many extractor-augmented computational (EAC) blocks with bridge systems. When combined with any source of high-fidelity |T⟩ states, our architecture can implement universal quantum circuits via parallel logical measurements, such that all single-block Clifford gates are compiled away. The size of an extractor on an n-qubit code is \tilde{O}(n), where the precise overhead has immense room for practical optimizations.
Joint work with Alexander Cowtan, Dominic Williamson and Theodore Yoder: arxiv.org/abs/2503.10390.
There has been a revolution in generalized lattice surgery methods for fault-tolerant quantum logic with quantum low-density parity-check codes. The gauging logical measurement procedure is a new primitive for low-overhead generalized lattice surgery. We apply this primitive to perform arbitrary logical Pauli measurements within, and between, code blocks, and to prepare logical magic states by measuring transversal Clifford gates. By combining these logical operations, we devise a scheme for universal fault-tolerant quantum computation with generalized lattice surgery on a fixed connectivity architecture.
Edward Zeng is a third-year PhD student at NYU, supervised by Roland Bauerschmidt. His research interests include spin systems and random matrices.
Since Shor published his famous algorithm 30 years ago, it has seemed that integer factorization requires relatively large quantum circuits, and thus would be a medium- to long-term application of quantum computing. But in just the past year or two, the cost landscape for integer factorization has changed dramatically. The main focus of this talk will be the Jacobi factoring circuit, which factors integers of the form N=P^2 Q with qubit count and depth roughly proportional to log Q. For appropriately chosen parameters, we argue that this circuit may actually provide the first efficiently verifiable proof of quantumness. Given sufficient time, I will also survey an array of other very recent results improving circuit costs for integer factorization, including circuits applicable to RSA integers of the form N=PQ.
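As a quick numerical illustration (not taken from the talk), the special integer form N = P^2 Q makes the claimed scaling concrete: the circuit cost tracks the bit-length of Q, which can be far smaller than the bit-length of N. The primes below are small, arbitrarily chosen examples.

```python
import math

# Illustrative only: an integer of the special form N = P^2 * Q, the
# class the Jacobi factoring circuit targets. P and Q are toy primes;
# real instances would use cryptographically large values.
P = 999_999_937   # prime, ~30 bits
Q = 1_000_003     # prime, ~20 bits
N = P * P * Q     # ~80 bits

# The circuit's qubit count and depth scale with log Q, not log N:
print(f"log2(Q) = {math.log2(Q):.1f}")   # ≈ 19.9
print(f"log2(N) = {math.log2(N):.1f}")   # ≈ 79.7
```

Even in this toy case, the quantity governing circuit size (log Q) is a quarter of the size of the number being factored (log N); the gap widens as P grows.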
We present a protocol for fault-tolerantly implementing the logical quantum random access memory (QRAM) operation, given access to a specialized, noisy QRAM device. For coherently accessing classical memories of size $2^n$, our protocol consumes only $\mathrm{poly}(n)$ fault-tolerant quantum resources (logical gates, logical qubits, quantum error correction cycles, etc.), avoiding the need to perform active error correction on all $\Omega(2^n)$ components of the QRAM device. This is the first rigorous conceptual demonstration that a specialized, noisy QRAM device could be useful for implementing a fault-tolerant quantum algorithm. In fact, the fidelity of the device can be as low as $1/\mathrm{poly}(n)$. The protocol queries the noisy QRAM device $\mathrm{poly}(n)$ times to prepare a sequence of $n$-qubit QRAM \textit{resource states}, which are moved to a general-purpose $\mathrm{poly}(n)$-size processor to be encoded into a QEC code, distilled, and fault-tolerantly teleported into the computation. To aid this protocol, we develop a new gate-efficient streaming version of quantum purity amplification that matches the optimal sample complexity in a wide range of parameters and is therefore of independent interest.
The exponential reduction in fault-tolerant quantum resources comes at the expense of an exponential quantity of purely classical complexity---each of the $n$ iterations of the protocol requires adaptively updating the $2^n$-size classical dataset and providing the noisy QRAM device with access to the updated dataset at the next iteration. While our protocol demonstrates that QRAM is more compatible with fault-tolerant quantum computation than previously thought, the need for significant classical computational complexity exposes potentially fundamental limitations to realizing a truly $\mathrm{poly}(n)$-cost fault-tolerant QRAM.
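The cost asymmetry described above can be made concrete with a toy calculation. The cubic polynomial below is an arbitrary stand-in for "poly(n)", not a figure from the work; only the exponential-vs-polynomial contrast is from the abstract.

```python
# Illustrative only: contrast poly(n) fault-tolerant quantum resources
# with the protocol's exponential classical work. n ** 3 is an arbitrary
# stand-in for "poly(n)".
def quantum_cost(n: int) -> int:
    return n ** 3              # stand-in for poly(n) logical resources

def classical_cost(n: int) -> int:
    return n * (2 ** n)        # n iterations, each touching a 2^n-entry dataset

for n in (10, 20, 30):
    print(n, quantum_cost(n), classical_cost(n))
```

Already at n = 30 the classical bookkeeping dwarfs the quantum resource count, which is the "potentially fundamental limitation" flagged above.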
Joint work with: Connor T. Hann, Sam McArdle, Grant Salton, Quynh T. Nguyen, Aleksander Kubica, Fernando G.S.L. Brandao