Sitan Chen (UC Berkeley)
Given the ability to interact with an unknown quantum system, what can we learn about the system and how much data do we need to collect? While this has strong parallels with classical distribution learning/testing problems, there is a crucial extra degree of freedom in the quantum setting. Whereas classically we interact with the unknown distribution simply by drawing i.i.d. samples, here we get to design the particular experiments we perform on our quantum data.
For instance, suppose we want to learn an unknown quantum state given access to many copies. At one extreme, we could load all copies into memory and make a single carefully chosen measurement that is entangled across all copies. While this is the most powerful way to interact with the data, keeping so many copies in memory without any of them decohering is well beyond the reach of near-term devices. More realistically, we may only be able to measure copies one at a time, perhaps choosing our measurements adaptively based on the outcomes of previous measurements.
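To make the single-copy access model concrete, here is a minimal simulation sketch (not from the talk; the protocol and parameters are illustrative assumptions): each fresh copy of an unknown qubit is measured independently in a randomly chosen Pauli basis, and the ±1 outcomes are averaged to estimate the state's Bloch vector.

```python
import numpy as np

# Pauli matrices, whose expectation values give the Bloch vector of a qubit.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = [X, Y, Z]

def bloch_to_state(r):
    """Density matrix rho = (I + r . sigma) / 2 for Bloch vector r."""
    return 0.5 * (np.eye(2) + r[0] * X + r[1] * Y + r[2] * Z)

def measure_once(rho, pauli, rng):
    """Measure ONE fresh copy of rho in the eigenbasis of a Pauli.

    Born rule: Pr[+1] = (1 + tr(rho * pauli)) / 2. Returns +1 or -1.
    """
    p_plus = 0.5 * (1 + np.real(np.trace(rho @ pauli)))
    return 1 if rng.random() < p_plus else -1

def estimate_bloch(rho, n_copies, rng):
    """Single-copy (unentangled, non-adaptive) protocol: each copy is
    consumed by an independent measurement in a random Pauli basis."""
    sums = np.zeros(3)
    counts = np.zeros(3)
    for _ in range(n_copies):
        axis = rng.integers(3)  # pick X, Y, or Z uniformly at random
        sums[axis] += measure_once(rho, PAULIS[axis], rng)
        counts[axis] += 1
    return sums / np.maximum(counts, 1)  # empirical Pauli expectations

rng = np.random.default_rng(0)
true_r = np.array([0.6, 0.0, 0.8])  # Bloch vector of the "unknown" state
rho = bloch_to_state(true_r)
est = estimate_bloch(rho, 30000, rng)
print(np.round(est, 2))
```

An entangled strategy would instead apply one joint measurement to the tensor product of all copies at once; the separations in the talk quantify how many more copies protocols like the one above can require.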
I will survey a number of recent works, joint with Sébastien Bubeck, Jordan Cotler, Hsin-Yuan Huang, Jerry Li, and Ryan O'Donnell, showing the first separations in the amount of quantum data needed under these two access models. In many cases, these separations can be quite dramatic, paving the way towards experimentally demonstrating a new kind of information-theoretic advantage for devices with quantum memory.
This talk will assume zero quantum background. In fact, our results build on a number of classical tools for establishing minimax lower bounds for testing/estimation.