Q & A with John Wright
We recently received the good news that our colleague John Wright, a UC Berkeley theorist who is actively involved with the Simons Institute’s Quantum Pod, has been awarded a 2026 Sloan Research Fellowship. Simons Institute Director Venkatesan Guruswami sat down with him to discuss his research and his reflections on the field.
Venkat Guruswami: First of all, congratulations on your Sloan Research Fellowship, and thank you for speaking with us. Your work spans quantum learning theory and some of the most surprising developments in quantum complexity in recent years. To begin with, what first drew you to theoretical computer science? Was there a particular moment, person, or problem that hooked you?
John Wright: I got started in computer science in high school, and when I went to UT Austin for college, I didn’t really know what I wanted to do within the field. At the time, my strongest ambition was probably to design video games, which I guess is a very common career goal for incoming freshmen.
Then, in my second or third semester, I took a course called Analysis of Algorithms with Adam Klivans. It was really a course in discrete mathematics (the UT analogue of our CS 70), and at that level, discrete math is full of fun, mind-bending puzzles. I found it completely addictive. I remember thinking: whatever lets me spend my life doing more of this, that’s what I want to do.
That course really turned me toward theory. After that, I started talking with Adam and began doing undergraduate research with him, and I guess I've never really looked back.
Venkat: So you began your PhD in a different area before pivoting to quantum computing midway through. How did that transition unfold? What initially caught your attention about quantum computing, and what convinced you that it was the right intellectual home for you?
John: I think I’d always wanted to do physics, going back at least to high school. When I arrived at UT Austin, my original plan was to double-major in computer science and physics. But I could never schedule my second physics course, Electricity and Magnetism, because the required lab always conflicted with my computer science courses. So for that very mundane reason, I never ended up majoring in physics. Still, it was always something that interested me and stayed in the back of my mind.
So when I realized quantum computing was a vibrant area, it felt like a natural fit. The first half of my PhD at Carnegie Mellon was in classical theoretical CS, in an area called hardness of approximation. I had a few good results, but there are significant barriers and major open problems that the field kind of revolves around, and it was just very difficult to make progress on them. So about halfway through, around 2013, I felt a little stuck.
I talked with my advisor, Ryan O’Donnell, about it, and we started brainstorming. We realized that I’d always wanted to do quantum computing and he had always wanted to explore it with a student but had never had one who was interested. So we decided to take the leap together and try something out. One thing that especially appealed to me about quantum at the time was that there were all these theorems in classical computer science that I loved, but they had already been proved in the 1990s. I felt I was a decade or two too late to work on those foundational results. In the quantum world, though, analogous questions were still open. It felt like a chance to do equally fundamental work, but with a physics twist. And I just thought that was really exciting.
Venkat: A significant portion of your work centers on quantum learning and tomography. For readers who may not be steeped in the area, what is quantum tomography trying to accomplish, and why is it such a central problem?
John: Quantum tomography is the problem of figuring out the state of a quantum system. You might run an experiment and end up with a small atom, molecule, or collection of particles, and those particles will have some quantum state associated with them. Your goal is to learn what state that system is actually in. It’s a fundamental problem for a very practical reason: if you run an experiment, you want to know what your experiment produced. If you read experimental quantum papers, they often include a section saying, in effect, “We ran the experiment, did tomography, and found that the final state matched our theoretical prediction.” That’s part of how people validate the theory.
It’s also important now that researchers are building small quantum devices. If you build a quantum gate, for example, you want to know whether it is actually doing what it is supposed to do. You put in an input state, run the gate, look at the output, and then use tomography to learn what that output state is and whether it matches the intended behavior.
More broadly, it’s also a very basic information-theoretic question: How do you learn a quantum object? Even going back to the 1960s, some of the first questions people were asking about quantum information were essentially early forms of quantum learning and tomography.
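To give readers a concrete feel for the simplest case of tomography, here is a toy classical simulation (purely illustrative; the state, shot counts, and helper function are hypothetical, and real tomography deals with far larger systems and mixed states): we estimate a single qubit's state by repeating measurements in the Pauli X and Z bases and averaging the ±1 outcomes.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "hidden" single-qubit pure state |psi> = cos(t)|0> + sin(t)|1>.
# Hypothetical toy state chosen for illustration; its real amplitudes
# put the Bloch vector in the x-z plane, so <X> and <Z> determine it.
t = 0.6
psi = np.array([np.cos(t), np.sin(t)])

# Pauli observables and the true expectation values we hope to recover.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
true_x = psi @ X @ psi  # = sin(2t)
true_z = psi @ Z @ psi  # = cos(2t)

def measure(pauli, shots):
    """Simulate projective measurements of psi in a Pauli eigenbasis and
    return the empirical mean of the +1/-1 outcomes."""
    evals, evecs = np.linalg.eigh(pauli)
    probs = np.abs(evecs.T @ psi) ** 2   # Born-rule outcome probabilities
    outcomes = rng.choice(evals, size=shots, p=probs)
    return outcomes.mean()

# "Tomography": estimate each Bloch component from repeated measurements.
est_x = measure(X, 100_000)
est_z = measure(Z, 100_000)
print(est_x, true_x)  # the estimates concentrate around the true values
print(est_z, true_z)
```

With enough shots, the empirical averages converge to the true expectation values, which is the basic statistical mechanism behind validating that an experiment produced the state it was supposed to.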
Venkat: One of the most stunning developments in quantum complexity in recent years was the MIP* = RE theorem, which revealed the extraordinary power of quantum proof systems. At what point did you and your collaborators begin to suspect that nonlocal games might be powerful enough to encode extremely difficult — even undecidable — computations?
John: I forget the exact timeline, but around 2018 we got started on a project in this area. At that point it was known that MIP*, a complexity class corresponding to interacting with two provers that share entanglement, contains NEXP, but it wasn’t known whether it could be any bigger than that. When we started collaborating, we asked whether MIP* might be able to capture something even larger. It seemed a little crazy at the time, but there was nothing telling us it couldn’t.
We first tried to fit exponential space into MIP*, but it felt like a square peg in a round hole. Then we moved up to an even bigger class, NEEXP, and things started working surprisingly well. At some point, when the project began to look promising, my collaborator Anand Natarajan told me that the ideas seemed to be pushing toward a place where, if they went further, we might end up putting undecidable problems into the class, and that would have had all sorts of remarkable consequences.
At the time, we both agreed it was too scary a prospect to think about directly, so we said, “Let’s not think about that for the moment. Let’s just make sure the thing in front of us works.” But in the back of our minds, it did start to feel as though the ideas could go all the way. Of course, getting them to go all the way required a great deal of additional work. Still, even a month or two into that first project, it seemed clear that’s where things were headed.
Venkat: And more philosophically, what do you think the result says about the nature of quantum correlations?
John: I think we’ve known for a long time that quantum correlations can have this remarkable property called rigidity. Roughly speaking, that means that if I’m a classical verifier interacting with two quantum parties, I can perform a test that certifies what kind of correlation they share. What our result shows is that this phenomenon can be pushed extraordinarily far: even a very small classical test can certify correlations that require an enormous amount of entanglement, essentially a quantum system of unbounded size. That’s a genuinely surprising feature of the quantum world, with no classical analog.
Venkat: Returning to tomography, in an upcoming STOC paper you present a new algorithm for quantum state learning. At a high level, what is the underlying idea, and what applications do you see for it?
John: At a high level, the idea is this: when you perform tomography, your algorithm often outputs an estimate that has signal in the right direction, meaning it points in some meaningful way toward the true state, but it also has a lot of noise in other directions. That noise can confuse you into thinking the state is one thing when it is actually something else.
What we were able to do was take an existing quantum algorithm with both signal and noise and modify it very carefully so that we amplify the signal and eliminate the noise, at least on average. That means the output of the algorithm, on average, exactly matches the quantity you are trying to estimate.
That turns out to be useful in a lot of settings. In the paper, we ended up with five different applications of this algorithm. One of them is shadow tomography, where instead of learning an entire quantum state, you only want to learn a small number of its features. Using our approach, we gave an optimal shadow tomography algorithm in the high-accuracy regime. More broadly, I think there will continue to be many applications whenever it helps to have an estimator whose noise disappears on average.
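As a rough classical analogy for an estimator whose noise "disappears on average" (a toy sketch, not the algorithm from the paper): suppose you want to estimate p² from coin flips with bias p. The naive plug-in estimate, squaring the sample mean, is systematically biased upward, but a small correction makes its average exactly equal to the target quantity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy setup: estimate p**2 from n Bernoulli(p) coin flips.
p = 0.3
n = 20            # flips per experiment
trials = 200_000  # repeat to measure each estimator's average behavior

flips = rng.random((trials, n)) < p
m = flips.mean(axis=1)  # sample mean of each experiment

# Naive plug-in estimator: E[m**2] = p**2 + p(1-p)/n, so it is biased.
naive = m**2

# Debiased estimator: subtracting an unbiased estimate of the variance
# term, m(1-m)/(n-1), makes the expectation exactly p**2.
debiased = m**2 - m * (1 - m) / (n - 1)

print(np.mean(naive) - p**2)     # positive bias of order p(1-p)/n
print(np.mean(debiased) - p**2)  # approximately zero
```

Each individual debiased estimate is still noisy, but the noise averages out exactly, which is the property that makes such estimators composable across the kinds of applications mentioned above.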
Venkat: What advice would you give to a student who is just starting out in quantum, given that the field is moving so rapidly along multiple axes? And what foundations do you think are most important to build?
John: Well, it’s hard to go wrong, because there are so many interesting directions right now. In some sense, the best thing you can do is pick the one that most genuinely inspires you and follow it.
One of the most exciting things happening in quantum is that people are actually building small devices and running them. If you have a good enough proposal, people can sometimes test it on a real device. So I think it’s especially valuable to be strong in theory while also staying aware of what is happening in practice. There is a great deal of important theoretical work to be done on improving these small devices and finding interesting applications for them. At least in the near term, I think that intersection of theory and practice is a very exciting place to be.
Venkat: The Simons Institute’s Quantum Pod has become a vibrant hub of activity. You’ve worked closely with many of its participants. From your perspective, what makes that environment distinctive? And more broadly, how has the Institute helped shape work across quantum computing and its interaction with other areas?
John: So it’s kind of funny — when I was first being recruited to Berkeley, one of the big selling points people kept mentioning was the Simons Institute. And in my head I was thinking, “It can’t be that good.” I came to Berkeley because of the weather, because I had friends here, and because it clearly felt like the best place for me for life reasons. Simons was much farther back in my mind.
But once I got here, I realized it really is that amazing. It’s incredibly collaborative. You see people there all the time. During workshops and semester-long programs, the Institute brings together leading experts from all over the world in one area. Being at Berkeley can start to feel like being at every university at once, because for part of the year everyone you’d want to talk to is suddenly right there.
Even outside those big programs, there are always interesting people around — postdocs and visiting faculty who seem to find their way through the Simons Institute. In the quantum area, for example, we have brainstorming sessions where people talk about the problems they’re excited about, and invite others to work on them. I’ve started collaborations that way more than once — I’ll go to a talk not expecting anything in particular and realize the area is fascinating, and then before long, I’m part of a project in it.
More broadly, I think the Institute has unquestionably shaped quantum computing. You can see it in the sheer number of papers that trace their origins to a Simons Institute workshop or program. I remember, for example, a one-day workshop on pseudorandom unitaries that sparked a huge amount of the important work that followed. That kind of catalytic effect happens all the time. When you bring that many researchers together that often, it’s almost impossible not to have a major impact on the field.