Cycle benchmarking is a new approach to scalable, complete, and efficient error diagnostics that will be essential for understanding and improving quantum computing performance from the NISQ era through fault tolerance. Cycle benchmarking grew out of ideas from randomized benchmarking and sidesteps the impractical scaling that renders tomographic methods obsolete. When combined with randomized compiling, cycle benchmarking can identify the full impact of errors and error correlations for any (parallel) gate combination of interest. I will show cycle benchmarking data from experimental implementations on multi-qubit superconducting and ion-trap quantum computers revealing that: (1) in leading platforms, cross-talk and other error correlations can be much more severe than expected, even many orders of magnitude larger than predicted by independent error models; (2) these cross-talk errors induce errors on other qubits (e.g., idling qubits) that are an order of magnitude larger than the errors on the qubits in the domain of the gate operation; and thus (3) the notion of "elementary gate error rates" is not adequate for assessing quantum computing operations, and cycle benchmarking provides the tool for an accurate assessment. I will then discuss how the aggregate error rates measured under cycle benchmarking can be used to place a practical bound on the accuracy of applications in what I call the "quantum discovery regime," where quantum solutions can no longer be checked via classical high-performance computing.
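The core idea behind extracting an aggregate error rate can be illustrated with a minimal sketch: a cycle of interest is repeated m times and the exponential decay of a surviving Pauli expectation value is fit to recover a composite process fidelity per cycle. The function name, toy data, and fidelity value below are hypothetical illustrations, not the talk's actual implementation or results.

```python
import numpy as np

# Hypothetical sketch: recover the per-cycle process fidelity f from
# measured expectation values that decay as E(m) = A * f**m, where m is
# the number of times the cycle is repeated.

def fit_decay(sequence_lengths, expectations):
    """Log-linear least-squares fit of E(m) = A * f**m; returns f."""
    # log E = m * log f + log A, so the slope of the fit gives log f.
    slope, _intercept = np.polyfit(sequence_lengths, np.log(expectations), 1)
    return float(np.exp(slope))

# Toy, noise-free data for a cycle with assumed true fidelity f = 0.98
# and state-preparation/measurement prefactor A = 0.95.
true_f, A = 0.98, 0.95
m = np.array([4, 8, 16, 32, 64])
E = A * true_f**m

estimated_f = fit_decay(m, E)
print(round(estimated_f, 3))  # → 0.98
```

Because the decay rate is insensitive to the prefactor A, this kind of fit separates the composite error of the repeated cycle from state-preparation and measurement errors; 1 - f is then an aggregate error rate for the whole cycle rather than for any single elementary gate.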
