Description

Recently there has been substantial interest in the development of quantum machine learning protocols. Despite this interest, several stumbling blocks have emerged: recent dequantizations of quantum algorithms, together with barren plateau results, show that new ideas are needed to build and train quantum models that have an advantage over classical ones. In this talk I will discuss the challenges that arise when trying to use quantum mechanical effects to train neural networks. I will show that entanglement, long viewed as a boon for quantum computers, is generically anathema for deep quantum neural networks: drawing on lessons from the thermalization community, unchecked entanglement drives these models exponentially close to the maximally mixed state, and gradient descent is not capable of remedying the situation. I will then turn to a partial solution, which involves switching to generative learning, and show that quantum neural networks can be efficiently trained using quantum analogues of the KL-divergence. These losses do not suffer from barren plateau problems because they diverge logarithmically for orthogonal states, which combats gradient decay. Finally, I will present numerical evidence that small-scale quantum neural networks can be trained to generate complex quantum states, and in turn suggest that generative pre-training may be a way to circumvent known barren plateau results for quantum machine learning.
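As a toy illustration of the gradient-decay point (a minimal sketch, not the models or losses from the talk): the quantum relative entropy S(ρ‖σ) = Tr[ρ(log ρ − log σ)], a quantum analogue of the KL-divergence, diverges when ρ and σ have orthogonal support. The hypothetical single-qubit NumPy example below compares the gradient of a fidelity-based loss with that of a log-fidelity ("KL-like") loss as a parameterized state rotates toward orthogonality with a fixed target; the fidelity gradient decays while the logarithmic loss keeps it large.

```python
import numpy as np

# Toy illustration (assumed setup, not the talk's actual models):
# |psi(theta)> = cos(theta/2)|0> + sin(theta/2)|1>, target state |0>.
def state(theta):
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

target = state(0.0)  # target |0>

def fidelity(theta):
    # |<target|psi(theta)>|^2 = cos^2(theta/2)
    return abs(np.vdot(target, state(theta))) ** 2

def grad(loss, theta, eps=1e-6):
    # Central finite-difference derivative of a scalar loss.
    return (loss(theta + eps) - loss(theta - eps)) / (2 * eps)

fid_loss = lambda th: 1.0 - fidelity(th)    # flattens near orthogonality
kl_loss = lambda th: -np.log(fidelity(th))  # diverges near orthogonality

for theta in [0.5, 1.5, 2.5, 3.0, 3.1]:
    print(f"theta={theta:.2f}  "
          f"|d fid_loss|={abs(grad(fid_loss, theta)):.3f}  "
          f"|d kl_loss|={abs(grad(kl_loss, theta)):.3f}")
```

Near θ = π the fidelity-loss gradient sin(θ)/2 vanishes, while the logarithmic loss's gradient tan(θ/2) grows without bound; this is the sense in which a logarithmic divergence for orthogonal states combats gradient decay.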

Panel discussion: Patrick Coles (Los Alamos National Labs), Aram Harrow (MIT), Maria Schuld (UKZN and Xanadu), Jarrod McClean (Google), Umesh Vazirani (UC Berkeley; moderator)
