Playlist: 22 videos

Meet the Fellows Welcome Event Fall 2022

0:13:20
Di Fang (University of California, Berkeley)
https://simons.berkeley.edu/talks/quantum-algorithms-hamiltonian-simulation-unbounded-operators
0:14:40
Jin-Peng Liu (Simons Institute, Berkeley)
https://simons.berkeley.edu/talks/quantum-scientific-computation

Quantum computers are expected to dramatically outperform classical computers for certain computational problems. There has been extensive previous work for linear dynamics and discrete models, including Hamiltonian simulations and systems of linear equations. However, for more complex realistic problems, one fundamental challenge is the substantial difference between the linear dynamics of a system of qubits and real-world systems with continuum, nonlinear, and stochastic behaviors. I focus mainly on the design and analysis of quantum algorithms for scientific computational problems, including topics such as linear and nonlinear differential equations, quantum dynamics, and stochastic processes, with applications in areas such as biology and epidemiology, fluid dynamics, quantum chemistry, finance, and machine learning.
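A core reduction behind quantum algorithms for differential equations is to discretize the linear dynamics dx/dt = Ax into one large linear system, which a quantum linear-systems solver can then attack. The following is a minimal classical sketch of that discretization (forward Euler), not any specific quantum algorithm from the talk; all function names are illustrative.

```python
import numpy as np

def euler_linear_system(A, x0, T, steps):
    """Encode the forward-Euler recurrence x_{k+1} = (I + h A) x_k as one
    block-bidiagonal linear system M X = rhs and solve it classically.
    A quantum algorithm would instead apply a quantum linear-systems
    solver to M, whose dimension is (steps + 1) * len(x0)."""
    n = len(x0)
    h = T / steps
    E = np.eye(n) + h * A                # one Euler step
    dim = (steps + 1) * n
    M = np.eye(dim)
    rhs = np.zeros(dim)
    rhs[:n] = x0                         # first block row pins x_0 = x0
    for k in range(steps):
        # block row k+1 enforces x_{k+1} - E x_k = 0
        M[(k + 1) * n:(k + 2) * n, k * n:(k + 1) * n] = -E
    X = np.linalg.solve(M, rhs)
    return X[-n:]                        # final state x_steps

A = np.diag([-1.0, -2.0])
x0 = np.array([1.0, 1.0])
xT = euler_linear_system(A, x0, T=1.0, steps=1000)
exact = np.exp(np.diag(A) * 1.0) * x0   # exact solution for diagonal A
print(np.max(np.abs(xT - exact)))       # small O(h) discretization error
```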
0:11:10
Ziv Scully (UC Berkeley; formerly CMU)
https://simons.berkeley.edu/talks/how-robust-gittins-index-queueing-case-study

The Gittins index is a tool originally developed to solve the Markovian/Bayesian multi-armed bandit problem. Since this initial development, the Gittins index has been applied to many other online stochastic optimization problems, including scheduling in single-server queueing systems. Unfortunately, a common theme in all of these applications of the Gittins index is that the optimality results are "brittle", with small changes to the problem setting breaking the optimality proof. For instance, in queueing, the optimality proof breaks for systems with multiple servers. But even in problem settings where the Gittins index is not optimal, perhaps it still yields near-optimal performance. How "robust" is the performance of the Gittins index to small changes to the problem setting? We will present some results on the Gittins index in queueing which show that the Gittins index is in certain senses robust. For example, we show that it yields near-optimal performance in certain settings with multiple servers. Underlying our results are new ways of thinking about the Gittins index, which we hope will prove useful in showing similar "robustness" results for the Gittins index in other domains.
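To make the queueing application concrete, here is a small illustrative sketch (my own, with hypothetical function names) of the Gittins rank function used for single-server scheduling: a job with attained service a gets rank sup over stopping levels b > a of P(S ≤ b | S > a) / E[min(S, b) − a | S > a], and the server works on the job of highest rank.

```python
from fractions import Fraction

def gittins_rank(dist, a):
    """Gittins rank of a job that has already received a units of service.
    dist maps service time -> probability.  The rank is
        sup_{b > a}  P(S <= b | S > a) / E[min(S, b) - a | S > a],
    i.e., the best achievable completion probability per unit of
    expected additional service."""
    alive = {s: p for s, p in dist.items() if s > a}
    total = sum(alive.values())
    best = Fraction(0)
    for b in sorted(alive):                        # candidate stopping levels
        done = sum(p for s, p in alive.items() if s <= b)
        work = sum(p * min(s, b) for s, p in alive.items()) - a * total
        best = max(best, done / work)              # conditioning cancels in the ratio
    return best

# Service time is 1 w.p. 1/2 and 4 w.p. 1/2.
dist = {1: Fraction(1, 2), 4: Fraction(1, 2)}
print(gittins_rank(dist, 0))  # 1/2: a fresh job might finish after 1 unit
print(gittins_rank(dist, 1))  # 1/3: a job that survived past 1 needs 3 more
```

The rank drops once a job survives past the short service time, so the policy prefers fresh jobs — the "give up on stragglers" behavior that makes Gittins-style scheduling effective.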
0:16:26
Chara Podimata (UC Berkeley)
https://simons.berkeley.edu/talks/incentive-aware-machine-learning

As machine learning algorithms are increasingly deployed for consequential decision-making (e.g., loan approvals, college admissions, probation decisions), humans try to strategically change the data they feed to these algorithms in an effort to obtain better decisions for themselves. If the deployed algorithms do not take these incentives into account, they risk producing decisions that are incompatible with the original policy's goal. In this talk, I will give an overview of my work on Incentive-Aware Machine Learning for Decision Making, which studies the effects of strategic behavior on both institutions and society as a whole and proposes ways to robustify machine learning algorithms against strategic individuals. I will look at the problem through a societal lens and discuss the tension that arises between having decision-making algorithms that are fully transparent and incentive-aware.
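A toy model of this strategic behavior (my own illustration, not the speaker's model; all names and parameters are hypothetical): agents face a linear threshold classifier and can raise their feature at linear cost, so anyone within benefit/cost of the threshold games it. An incentive-aware designer anticipates this and shifts the threshold by that manipulation budget.

```python
def best_response(x, theta, benefit=1.0, cost=10.0):
    """Agent with true feature x facing threshold theta: move up to theta
    if the required effort costs less than the benefit of acceptance."""
    gap = theta - x
    if 0 < gap <= benefit / cost:
        return theta              # game the classifier
    return x                      # already accepted, or too expensive to game

def accepted(xs, theta, incentive_aware=False, benefit=1.0, cost=10.0):
    """Fraction accepted after agents best-respond.  An incentive-aware
    designer raises the threshold by the manipulation budget benefit/cost."""
    t = theta + benefit / cost if incentive_aware else theta
    return sum(best_response(x, t, benefit, cost) >= t for x in xs) / len(xs)

xs = [0.1, 0.35, 0.45, 0.55, 0.8]
print(accepted(xs, theta=0.5))                        # 0.6: 0.45 games its way in
print(accepted(xs, theta=0.5, incentive_aware=True))  # 0.4: gaming is priced in
```

Even this toy version shows the tension from the abstract: the incentive-aware threshold stops unqualified agents from gaming their way in, but it also forces some qualified agents (here, 0.55) to exert effort they would not have needed under a naive, fully transparent rule.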
0:13:11
Chen-Yu Wei (Simons Institute)
https://simons.berkeley.edu/talks/adaptive-online-learning-without-prior-knowledge

I will give an overview of my previous work, which centers on algorithm design for online learning, with the goal of achieving regret that adapts to the problem instance without prior knowledge of that instance. Examples will be drawn from the full-information expert problem, bandits, and Markov decision processes. I will conclude with some future directions I am interested in.
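One standard way to remove prior knowledge of the horizon T in the expert problem is Hedge with a time-varying learning rate η_t = sqrt(ln N / (t + 1)). This is a minimal sketch of that generic technique, not any specific result from the talk.

```python
import math

def hedge_anytime(losses):
    """Hedge with learning rate eta_t = sqrt(ln N / (t+1)), so the horizon T
    need not be known in advance.  losses[t][i] in [0, 1] is the loss of
    expert i at round t.  Returns (algorithm loss, best expert loss)."""
    n = len(losses[0])
    cum = [0.0] * n                  # cumulative loss of each expert
    alg_loss = 0.0
    for t, loss in enumerate(losses):
        eta = math.sqrt(math.log(n) / (t + 1))
        w = [math.exp(-eta * c) for c in cum]
        z = sum(w)
        p = [wi / z for wi in w]     # play the exponential-weights distribution
        alg_loss += sum(pi * li for pi, li in zip(p, loss))
        cum = [c + li for c, li in zip(cum, loss)]
    return alg_loss, min(cum)

# Adversarial-looking alternating losses for two experts.
T = 1000
losses = [[t % 2, 1 - t % 2] for t in range(T)]
alg, best = hedge_anytime(losses)
regret = alg - best                  # stays within the O(sqrt(T log N)) bound
print(f"regret = {regret:.1f}")
```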
0:12:35
Gautam Goel (Simons Institute)
https://simons.berkeley.edu/talks/competititve-control

I consider online control in linear dynamical systems and present a new control algorithm that attains an optimal competitive ratio relative to a clairvoyant offline optimal controller, which picks its control actions with perfect knowledge of the disturbance sequence. The key technical idea used to derive this algorithm is to reduce competitive control in the original system to H∞ control in a specially constructed synthetic system. I also present numerical simulations which show that competitive controllers can significantly outperform standard H2 and H∞ controllers.
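To make the benchmark concrete, here is an illustrative sketch (my own, not the speaker's algorithm) for a scalar system: the clairvoyant offline optimal cost is a least-squares problem, since every state is linear in the controls and the fully known disturbances, and any online controller's cost divided by it gives a competitive ratio of at least 1.

```python
import numpy as np

def offline_optimal_cost(a, w):
    """Clairvoyant offline controller for x_{t+1} = a x_t + u_t + w_t,
    cost sum_t x_t^2 + u_t^2 with x_0 = 0: each x_t is linear in the u's
    and the (fully known) w's, so minimize by least squares."""
    T = len(w)
    # x_t = sum_{s < t} a^(t-1-s) (u_s + w_s), for t = 1..T
    L = np.zeros((T, T))
    for t in range(1, T + 1):
        for s in range(t):
            L[t - 1, s] = a ** (t - 1 - s)
    m = L @ w                              # state trajectory if u = 0
    # minimize |L u + m|^2 + |u|^2  =>  (L^T L + I) u = -L^T m
    u = np.linalg.solve(L.T @ L + np.eye(T), -L.T @ m)
    x = L @ u + m
    return float(x @ x + u @ u)

def online_cost(a, w, k):
    """Naive online controller u_t = -k x_t, no disturbance knowledge."""
    x, cost = 0.0, 0.0
    for wt in w:
        u = -k * x
        cost += x * x + u * u
        x = a * x + u + wt
    return cost + x * x                    # count the terminal state too

rng = np.random.default_rng(0)
w = rng.normal(size=200)
off = offline_optimal_cost(1.0, w)
on = online_cost(1.0, w, k=0.6)
print(on / off)    # competitive ratio of the naive controller, always >= 1
```

Since any online control sequence is a feasible point of the offline optimization, the ratio is at least 1 by construction; the competitive-control question is how close to 1 a single online algorithm can guarantee against all disturbance sequences.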
0:13:45
William Hoza (Simons Institute)
https://simons.berkeley.edu/talks/pseudorandom-generators-and-small-space-derandomization

Compared to other big open questions in complexity theory, there is a lot of optimism about proving L = BPL in our lifetimes. Over the past few decades, there has been steady progress on the problem from several different angles. Arguably the most promising approach is to design sufficiently powerful pseudorandom generators (PRGs). PRGs have applications beyond derandomization, and they provide deep insights into the powers and limitations of the models of computation that they fool. In this talk, I will give a brief overview of these topics and my research in the area.
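The smallest instance of the PRG template is the pairwise-independent generator h_{a,b}(i) = (a·i + b) mod p: a seed of two symbols stretches to p outputs that look uniform to any test inspecting two positions. This is my own illustration of the seed-length-versus-stretch trade-off that space-bounded PRGs push much further, not a generator that fools BPL.

```python
from itertools import product

def prg(a, b, p):
    """Seed (a, b) in Z_p x Z_p -> p pairwise-independent output symbols."""
    return [(a * i + b) % p for i in range(p)]

p = 5
# Exhaustive check of pairwise independence: for every fixed pair of
# positions i != j, the joint output (z_i, z_j) hits each of the p^2
# value pairs exactly once as the seed ranges over all p^2 seeds.
for i, j in product(range(p), repeat=2):
    if i == j:
        continue
    counts = {}
    for a, b in product(range(p), repeat=2):
        z = prg(a, b, p)
        counts[(z[i], z[j])] = counts.get((z[i], z[j]), 0) + 1
    assert all(c == 1 for c in counts.values())
print("pairwise independent:", p * p, "seeds ->", p, "symbols")
```

The check works because for i ≠ j the map (a, b) ↦ (h(i), h(j)) is a bijection over Z_p² when p is prime: h(i) − h(j) = a(i − j) determines a, and then b is determined by h(i).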
0:14:31
Nilin Abrahamsen (UC Berkeley)
https://simons.berkeley.edu/talks/taming-sign-problem-explicitly-antisymmetrized-neural-networks-rough-activation-functions

Explicit antisymmetrization of a two-layer neural network is a potential candidate for a universal function approximator for generic antisymmetric functions, which are ubiquitous in quantum physics. However, this strategy suffers from a sign problem: due to near-exact cancellation of positive and negative contributions, the magnitude of the antisymmetrized function may be significantly smaller than that of the function before antisymmetrization. We prove that the severity of the sign problem is directly related to the smoothness of the activation function. For smooth activation functions (e.g., tanh), the sign problem of the explicitly antisymmetrized two-layer neural network deteriorates super-polynomially with respect to the system size. On the other hand, for rough activation functions (e.g., ReLU), the deterioration rate of the sign problem can be tamed to be at most polynomial with respect to the system size. Finally, the cost of a direct implementation of an antisymmetrized two-layer neural network scales factorially with respect to the system size. We describe an efficient algorithm for approximate evaluation of such a network, whose cost scales polynomially with respect to the system size and the inverse precision. Joint work with Lin Lin.
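A minimal numpy sketch of the object in the abstract, for both a smooth (tanh) and a rough (ReLU) activation: the direct antisymmetrization sums over all n! permutations, which is exactly the factorial cost mentioned above, so only tiny n is feasible here. Function names are mine.

```python
import numpy as np
from itertools import permutations

def two_layer(x, W, c, act):
    """Two-layer network f(x) = sum_k c_k * act(W[k] . x)."""
    return float(c @ act(W @ x))

def antisymmetrize(f, x):
    """(A f)(x) = sum_pi sign(pi) f(x_pi): factorial cost in len(x)."""
    total = 0.0
    for pi in permutations(range(len(x))):
        # parity of the permutation via its inversion count
        inv = sum(pi[i] > pi[j]
                  for i in range(len(pi)) for j in range(i + 1, len(pi)))
        total += (-1) ** inv * f(x[list(pi)])
    return total

rng = np.random.default_rng(1)
n, width = 4, 8
W = rng.normal(size=(width, n))
c = rng.normal(size=width)
x = rng.normal(size=n)

for act in (np.tanh, lambda z: np.maximum(z, 0.0)):   # smooth vs rough
    f = lambda y, act=act: two_layer(y, W, c, act)
    ax = antisymmetrize(f, x)
    # swapping two inputs must flip the sign of the antisymmetrized output
    x_swapped = x.copy()
    x_swapped[[0, 1]] = x_swapped[[1, 0]]
    assert abs(antisymmetrize(f, x_swapped) + ax) < 1e-8 * max(1.0, abs(ax))
```

Comparing |A f| against the typical magnitude of the unantisymmetrized terms f(x_pi) is one way to observe the cancellation the abstract calls the sign problem, though the smooth-versus-rough separation it proves is an asymptotic statement in the system size.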
0:20:11
Emmanouil-Vasileios Vlatakis-Gkaragkounis (Simons Institute/FODSI)
https://simons.berkeley.edu/talks/teamwork-makes-von-neumann-work-two-team-zero-sum-games-0

Back in the day, machine learning problems were a drag. Both classification and data generation, the main tasks of artificial intelligence, had been classified as computationally hard problems. Decades later, with subsequent computing innovation, machines have transformed into ultra-smart, self-learning, automated versions of themselves that are sweeping the human landscape.

This unreasonable effectiveness of modern machine learning algorithms has thrown down the gauntlet to algorithms researchers, and there is perhaps no other problem domain with a more urgent need for the beyond worst-case approach.

In this talk, we will discuss a series of recent results in the areas of Local Search, Continuous Optimization, and Game Dynamics.
0:15:40
Yeganeh Alimohammadi (Stanford)
https://simons.berkeley.edu/talks/algorithms-using-local-graph-features-predict-epidemics

We study a simple model of epidemics where an infected node transmits the infection to each of its neighbors independently with probability p. The size of an outbreak in this model is closely related to that of the giant connected component in 'edge percolation,' studied for a large class of networks, including the configuration model and preferential attachment. Even though these models capture the role of super-spreaders in the spread of an epidemic, they only consider tree-like graphs, i.e., graphs with few short cycles locally. Here, we ask a different question: what information is needed for general networks to predict the size of an outbreak? Is it possible to make predictions by accessing the distribution of small subgraphs (or motifs)? We answer the question in the affirmative for large-set expanders with Benjamini-Schramm limits. In particular, we show that there is an algorithm that gives a (1−ϵ) approximation of the probability and the final size of an outbreak by accessing a constant-size neighborhood of a constant number of nodes chosen uniformly at random. Based on joint work with Christian Borgs and Amin Saberi. https://epubs.siam.org/doi/pdf/10.1137/1.9781611977073.136
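The quantity being estimated can be sketched with a non-local Monte Carlo baseline (my own illustration, not the paper's local algorithm): percolate each edge independently with probability p, and the expected outbreak size from a uniformly random seed is the mean size of the infected component. The local algorithm in the talk approximates this while exploring only constant-size neighborhoods.

```python
import random
from collections import deque

def outbreak_size(adj, seed, p, rng):
    """BFS in which each edge transmits independently with probability p."""
    infected = {seed}
    queue = deque([seed])
    while queue:
        v = queue.popleft()
        for u in adj[v]:
            if u not in infected and rng.random() < p:
                infected.add(u)
                queue.append(u)
    return len(infected)

def mean_outbreak(adj, p, trials, rng):
    """Monte Carlo estimate of the expected outbreak size from a
    uniformly random seed node."""
    n = len(adj)
    return sum(outbreak_size(adj, rng.randrange(n), p, rng)
               for _ in range(trials)) / trials

# Ring of 12 nodes plus diameter chords: a small connected test graph.
n = 12
adj = [[(v - 1) % n, (v + 1) % n, (v + n // 2) % n] for v in range(n)]
rng = random.Random(0)
print(mean_outbreak(adj, p=1.0, trials=10, rng=rng))    # 12.0: full infection
print(mean_outbreak(adj, p=0.3, trials=2000, rng=rng))  # strictly below 12
```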