# Letter from the Director, Summer 2017

The research programs of the Simons Institute in Fall 2016 and Spring 2017 continued the Institute’s tradition of considering fundamental questions about computation in all its forms. The wide range of provocative topics addressed by these programs included decision-making in an uncertain environment, the power of randomization, ethical challenges from artificial intelligence, and the foundations of quantum mechanics. This newsletter presents articles on all of these programs, as well as an advance look at the Fall 2017 “jumbo” (double-sized) program on Bridging Continuous and Discrete Optimization. It also contains a feature article on the pleasures and perils of machine learning, by Journalist in Residence John Markoff.

Here I give brief snapshots of these articles, in the hope of enticing you to read further. For those interested in probing more deeply, video recordings of the lectures from the boot camps and workshops of these programs are available through our website.

**Machine Learning Becomes the New “Plastics,” Transforming the Once-Isolated World of CS Theory**, by John Markoff

This article by our Journalist in Residence considers ethical challenges presented by the recent success of deep neural nets in areas such as computer vision, robotics and language translation. Many researchers agree that while artificial general intelligence does not appear to be in the cards, many concrete decision-making tasks will be transferred from humans to algorithms. Such automated systems may help us address fundamental challenges of health, productivity and climate change, but they also threaten to make many kinds of human labor obsolete, to invade our privacy through automated surveillance, and to cede power over human safety, and even life and death, to autonomous decision-making algorithms whose limits and power are not completely understood.

**Research Vignette: Algorithms and Uncertainty (Fall 2016)**, by Amos Fiat

Amos Fiat’s Research Vignette, “Setting Posted Prices Under Uncertainty,” considers algorithms that present a choice of outcomes to selfish agents arriving online. Each agent has a utility or disutility for each outcome. The algorithm seeks to optimize social welfare or revenue, but may not know the preferences of the agents, who may even be motivated to misstate them. Through the example of assigning arriving jobs to machines with different speeds so as to minimize makespan, the article illustrates a simple scheme that dynamically sets a price on each machine before the next agent arrives. The scheme elicits truthful behavior and comes close to maximizing social welfare.
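The flavor of such a scheme can be conveyed by a minimal sketch (the pricing rule below is a hypothetical heuristic for illustration, not the mechanism analyzed in the Vignette): the mechanism posts a price on each machine, and each arriving selfish job simply picks the machine minimizing its own cost, processing time plus posted price.

```python
# Illustrative sketch of posted-price job scheduling.  The pricing
# rule here (price = current completion time of the machine) is a
# made-up heuristic for illustration only.

def post_prices(loads, speeds):
    """Post a price on each machine equal to its current completion
    time, steering arriving agents away from loaded machines."""
    return [load / speed for load, speed in zip(loads, speeds)]

def assign_jobs(jobs, speeds):
    """Each selfish job picks the machine minimizing its private cost:
    its own processing time on that machine plus the posted price."""
    loads = [0.0] * len(speeds)
    for size in jobs:
        prices = post_prices(loads, speeds)
        costs = [size / s + p for s, p in zip(speeds, prices)]
        best = costs.index(min(costs))
        loads[best] += size
    return loads

# One fast machine (speed 2) and one slow machine (speed 1).
speeds = [2.0, 1.0]
loads = assign_jobs(jobs=[4, 3, 2, 2, 1], speeds=speeds)
makespan = max(load / speed for load, speed in zip(loads, speeds))
```

On this tiny instance the selfish choices happen to balance the completion times of the two machines; the point of the actual analysis is to design prices with such guarantees for every arrival sequence.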

**Research Vignette: Logical Structures in Computation (Fall 2016)**, by Albert Atserias

The setting for this article on entangled solutions to logic puzzles is a combinatorial game played by Alice and Bob against an adversary. After an initial gathering to agree on a common strategy, Alice and Bob will not be allowed to communicate during the game. Suppose that, before they separate, they prepare a pair of qubits in an entangled state, and Alice receives one of them and Bob the other. It turns out that the ability of Alice and Bob to coordinate their actions in the game by measuring their qubits gives them an advantage. The fact that entanglement confers an advantage has consequences for the foundations of quantum mechanics, the study of the local structure of conjunctive database queries, and the question of whether the local structure of a finite model can determine it up to isomorphism.

**From the Inside: Foundations of Machine Learning (Spring 2017)**, by Matus Telgarsky

The central focus of this program was on supervised learning, in which an algorithm learns from a teacher who presents training examples together with appropriate responses.

In interactive learning, the algorithm plays an active role by specifying useful training examples. For example, a self-driving car might improve its behavior by being exposed to challenging driving conditions.

In representation learning, the input data is transformed into a convenient or revealing form: for example, a natural language utterance may be represented by a vector of selected features. A representation theory workshop focused on a set of real-life examples exhibiting the strengths and weaknesses of neural network representations.

The final workshop of the program dealt with learning problems that are information-theoretically easy but appear to be computationally hard: i.e., where a small number of training examples suffice, but processing those examples appears to require exponential time.

**From the Inside: Pseudorandomness (Spring 2017)**, by Valentine Kabanets

What does it mean for a fixed object to be “random-like”? Can a deterministic algorithm generate such objects efficiently? Can randomized algorithms be efficiently simulated without any truly random coin tosses? What is random about the distribution of prime numbers? Such questions were the subject matter of the Spring 2017 program on Pseudorandomness.

The first workshop of the program was entitled Expanders and Extractors. A randomness extractor is a function that extracts almost uniform random bits from sources of biased and correlated bits. A recent breakthrough, due to Eshan Chattopadhyay and David Zuckerman, is the construction of a randomness extractor from two independent weakly random sources. A related breakthrough, by Gil Cohen, advanced a central question in the branch of combinatorics known as Ramsey theory: the explicit construction of graphs on n nodes that contain neither a large clique nor a large independent set.
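The idea of a two-source extractor can be illustrated with the classical inner-product construction of Chor and Goldreich, which works when each n-bit source has min-entropy above n/2 (the Chattopadhyay-Zuckerman breakthrough handles far lower min-entropy):

```python
# Classical two-source extractor: the inner product mod 2 of two
# independent weak sources (Chor-Goldreich).  Toy demonstration on
# 4-bit sources, each supported on 8 of the 16 possible strings.
from itertools import product

def inner_product_extractor(x, y):
    """Extract one nearly unbiased bit from two bit-tuples."""
    assert len(x) == len(y)
    return sum(a & b for a, b in zip(x, y)) % 2

# Two weak sources: one has its first bit stuck at 0, the other is
# uniform over even-parity strings.  Neither is uniform on its own.
source_a = [b for b in product([0, 1], repeat=4) if b[0] == 0]
source_b = [b for b in product([0, 1], repeat=4) if sum(b) % 2 == 0]

# Count how often each output bit appears over all pairs of samples.
counts = [0, 0]
for x in source_a:
    for y in source_b:
        counts[inner_product_extractor(x, y)] += 1
```

Over the 64 input pairs the output bit is close to balanced (36 zeros versus 28 ones), even though each source separately is far from uniform; with longer strings and higher min-entropy the bias shrinks rapidly.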

The second workshop concerned the construction of pseudorandom generators: efficient algorithms for converting a short truly random binary string into a much longer string that cannot be distinguished from a truly random one by a computationally limited observer.
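Why the observer must be computationally limited is a pigeonhole observation, made concrete in the toy computation below (the generator shown is a made-up stretching rule for illustration; it is not a real pseudorandom generator):

```python
# A deterministic generator stretching s seed bits to n > s output
# bits can hit at most 2**s of the 2**n possible outputs, so an
# unbounded observer always distinguishes its output from uniform.
from itertools import product

def toy_generator(seed_bits):
    """Hypothetical generator doubling the length of the seed by
    interleaving each bit with the running parity (illustration only,
    not a cryptographic or complexity-theoretic PRG)."""
    out, parity = [], 0
    for b in seed_bits:
        parity ^= b
        out.extend([b, parity])
    return tuple(out)

s, n = 4, 8
outputs = {toy_generator(seed) for seed in product([0, 1], repeat=s)}

# An unbounded distinguisher just tests membership in `outputs`:
# the generator hits only this fraction of the 2**n strings.
fraction_hit = len(outputs) / 2**n
```

Here only 16 of the 256 length-8 strings can ever appear, so the membership test succeeds with probability 1 on the generator's output but only 1/16 on a uniform string. The whole subtlety of the field lies in fooling observers who cannot afford such a test.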

The final workshop, Structure Versus Randomness, concerned the representation of mathematical objects as a “structured part” plus a “pseudorandom part,” such that the structured part carries the property of interest and the pseudorandom part can be essentially ignored. Examples of such decomposition results include Szemerédi’s regularity lemma in graph theory and Impagliazzo’s hardcore lemma in complexity theory.

**Looking Ahead: Bridging Continuous and Discrete Optimization (Fall 2017)**, by Luca Trevisan

This extra-large program aims to build bridges between the study of discrete optimization algorithms and their complexity in computer science, and the study of continuous optimization in electrical engineering and numerical analysis.

There will be a focus on the exact or approximate solution of discrete optimization problems using continuous convex relaxations. This will include impossibility results establishing the limits of applicability of relaxations based on linear or semidefinite programming. A second focus will be on new methods for the solution of Laplacian systems of linear equations in nearly linear time, using graph-theoretic constructions of preconditioners to speed up the convergence of gradient descent methods. These methods can be applied in turn to combinatorial problems such as the computation of maximum network flows, demonstrating the ability of methods from linear algebra to surpass the classical purely combinatorial methods for these problems.
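The basic iteration underlying this line of work can be shown in a toy numeric sketch (a diagonal Jacobi preconditioner on a four-node path graph; the program studies far more sophisticated graph-theoretic preconditioners achieving nearly linear time):

```python
# Toy sketch: solving a small grounded Laplacian system L x = b by
# gradient descent with a diagonal (Jacobi) preconditioner.

def matvec(A, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

# Grounded Laplacian of the path 0-1-2-3 with node 3 fixed to zero
# (its row and column removed), which makes the system nonsingular.
A = [[ 1, -1,  0],
     [-1,  2, -1],
     [ 0, -1,  2]]
b = [1.0, 0.0, 0.0]          # inject one unit of current at node 0

x = [0.0, 0.0, 0.0]
step = 0.9
for _ in range(200):
    r = [bi - ri for bi, ri in zip(b, matvec(A, x))]   # residual b - Ax
    # Preconditioning: rescale each residual entry by 1 / A[i][i]
    # before taking the descent step, which speeds up convergence.
    x = [xi + step * ri / A[i][i]
         for i, (xi, ri) in enumerate(zip(x, r))]
```

The iterate converges to the vector of potentials (3, 2, 1): node 0 sits at potential 3 because it is three unit-resistance edges away from the grounded node. Replacing the diagonal by a preconditioner built from a carefully chosen subgraph is what drives the nearly-linear-time Laplacian solvers.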

These articles illustrate the ever-growing breadth and depth of the theory of computing. I hope you find them enjoyable and illuminating.

Dick Karp

Director, Simons Institute
