This Fall at the Simons Institute

by Prasad Raghavendra (Simons Institute)

This fall, the Simons Institute is hosting two synergistic programs: one on Probability, Geometry, and Computation in High Dimensions and the other on Theory of Reinforcement Learning.

Probability, Geometry, and Computation in High Dimensions
In the study of algorithms for statistical inference tasks in high dimensions, there is a rich interplay among the areas of probability, geometry, and computation. For example, ideas from statistical physics have been brought to bear in understanding algorithm design and structural properties in probabilistic settings. Many of the computational and structural phase transitions predicted via statistical physics are only now being rigorously validated. Similarly, the phenomenon of “concentration of measure” lies at the intersection of probability and geometry and is closely tied to the problem of obtaining dimension-free guarantees for many important algorithms.
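One classical instance of this phenomenon is Gaussian concentration (stated here for concreteness; this standard formulation is illustrative rather than specific to the program): if X is a standard Gaussian vector in R^n and f : R^n -> R is L-Lipschitz, then

    \Pr\big[\, |f(X) - \mathbb{E} f(X)| \ge t \,\big] \le 2 \exp\!\left( -t^2 / (2L^2) \right).

The bound does not depend on the dimension n at all, which is precisely the kind of dimension-free behavior at stake.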

The program aims to bring together computer scientists, mathematicians, physicists, and statisticians toward addressing basic questions such as: Which properties of data can be learned from a small number of samples? Can we characterize general trade-offs between the quality of data (statistical information) and the availability of computational resources? For what kinds of problems is it possible to make algorithmic guarantees that are dimension-free or have minimal dimension dependence? 

The first workshop of this program, to be held September 21 to 25, will be devoted to rigorous evidence for abrupt change in structural properties and computational complexity (computational phase transitions) predicted by statistical physics for a variety of statistical problems. The second workshop will be concerned with concentration of measure, a ubiquitous phenomenon in high-dimensional spaces and a key ingredient in several randomized algorithms and Monte Carlo sampling methods. This workshop will bring high-dimensional geometers and probabilists together with computer scientists to share ideas on applications as well as state-of-the-art techniques. The third and final workshop of the program will focus on algorithms for learning and testing that are dimension-free or have a mild dependence on the dimension. With data becoming increasingly high-dimensional, devising algorithms with such guarantees is not only desirable but also necessary.

Theory of Reinforcement Learning 
In recent years, reinforcement learning has found exciting new applications to problems in artificial intelligence and robotics. Many of these advances were made possible by a combination of large-scale computation, innovative use of flexible neural network architectures and training methods, and new and classical reinforcement learning algorithms. However, the power and limitations of RL-based algorithms are still not well understood. Core issues in reinforcement learning, such as the efficiency of exploration and the trade-off between the scale and the difficulty of learning and planning, have been studied extensively. Yet when it comes to the design of scalable algorithms, many fundamental questions remain open. This program aims to reunite researchers across disciplines that have played a role in developing the theory of reinforcement learning.

The first workshop of the program will be devoted to understanding the success of deep neural networks in reinforcement learning, in particular for algorithms that are able to learn in environments previously thought to be much too large. The second workshop will engage theoretical computer scientists to study whether tools from the classical online algorithms literature can be brought to bear on reinforcement learning setups: online algorithms have traditionally dealt with dynamic environments, and RL is but one approach to modeling interactions with such an environment.

The third and final workshop will attempt to gather some of the tools needed to find good policies from off-policy data. Typically in RL, it is assumed that one can choose a policy and obtain data generated by that policy (either by running the policy or through simulation). In many applications, however, obtaining on-policy data is impossible, and all one has is a batch of data that may have been generated by a nonstationary, or even unknown, policy. Estimating the value of new policies in such settings is a hard statistical problem, which will be the focus of this workshop.
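To make the off-policy evaluation problem concrete, here is a minimal Python sketch of the simplest such estimator, importance sampling, in a one-step (bandit) setting; the policies, reward model, and numbers below are invented for illustration and are not drawn from the workshop itself.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical setup: 3 actions, Gaussian rewards with these means.
    true_means = np.array([1.0, 0.5, 0.2])
    behavior = np.array([0.2, 0.3, 0.5])  # logging (behavior) policy
    target = np.array([0.7, 0.2, 0.1])    # policy we want to evaluate

    # Log a batch of data under the behavior policy only.
    n = 100_000
    actions = rng.choice(3, size=n, p=behavior)
    rewards = rng.normal(loc=true_means[actions], scale=1.0)

    # Importance sampling: reweight each logged reward by pi(a) / b(a),
    # correcting for the mismatch between the two policies.
    weights = target[actions] / behavior[actions]
    is_estimate = np.mean(weights * rewards)

    true_value = target @ true_means
    print(f"IS estimate: {is_estimate:.3f} (true value: {true_value:.3f})")

Even in this toy setting, the importance weights inflate the variance whenever the behavior policy rarely takes actions the target policy favors; controlling that variance, and coping with unknown or nonstationary behavior policies, is where the real statistical difficulty begins.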

While the pandemic has stopped us from traveling, it has simultaneously erased geographic barriers in a sense. Wherever you may be, we hope you will join us for our workshops and other events this fall.
