Abstract
With quantum error correction on the horizon, improvements in quantum computing hardware have brought us to a point where substantially larger NISQ experiments are possible. With uncorrected errors, experiments suffer from an exponentially vanishing signal as the volume of the computation increases. This makes the NISQ regime one where asymptotics are less relevant and the focus is on finite sizes. In this regime, classical methods compete on more even ground, and even quasi-brute-force approaches can be competitive. In this talk, I will introduce the concept of effective quantum volume, which will help me discuss the interplay between the computational cost of NISQ experiments, their usefulness, and their resilience to noise. I will also discuss the role of different classical methods in this context. While the beyond-classical nature of current random circuit sampling demonstrations is fairly well established, one question remains open: will there be room for useful, beyond-classical NISQ applications before quantum error correction is implemented?