Looking Ahead: Logic and Uncertainty

by Luca Trevisan

Mathematical logic has been intertwined with computer science since before computer science existed as an academic field. The work of Turing, Church and Post, motivated by questions in mathematical logic, established a rigorous definition of algorithms and founded the theory of computability. In the subsequent eighty years, mathematical logic has provided important tools used in several areas of computer science: it provided inspiration for early results in computational complexity, it provided the language to reason about relational databases, and type theory and the semantics of lambda calculus strongly influenced the theory and practice of programming languages. In the 1930s, as Turing, Church and Post were discovering the theory of computation, von Neumann initiated the study of logics to reason about quantum systems, which, as a special case, also capture classical probabilistic systems. In the Fall 2016 semester, the Simons Institute will host a program on Logic and a program on Algorithms and Uncertainty.

The program on Logical Structures in Computation, organized by Samson Abramsky, Anuj Dawar, Phokion G. Kolaitis, and Prakash Panangaden, will bring together researchers interested in logical models of probabilistic and quantum systems, database theory, and algorithmic model theory. One theme of the program will be the use of logical systems to reason about probabilistic models and about decision-making under uncertainty (a topic of common interest with the concurrent program described below). A second theme is model theory, which provides tools to reason about knowledge. The third theme is the application of logics to database theory and data science.

The program on Algorithms and Uncertainty, organized by Anupam Gupta, Stefano Leonardi, Avrim Blum, Robert Kleinberg, Eli Upfal, and Adam Wierman, will focus on the design and analysis of algorithms that must operate without full knowledge of their input. The program ties together techniques for analyzing such algorithms, and it connects various “beyond worst-case analysis” approaches to the analysis of algorithms.

Stochastic optimization and online algorithms are two classical frameworks to analyze algorithms that lack full information about their input. In stochastic optimization, one has a probabilistic model of the input, possibly derived from noisy observations, and one wants to construct a solution to the problem that is good on average, according to the known input distribution. In an online algorithm, data are revealed (or discovered) over time, and the algorithm has to make decisions about the data items known up to a certain point in time, without the benefit of knowing which data items will come later.
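To make the online setting concrete, here is a minimal sketch of the classical ski-rental problem, a textbook online algorithm; the costs, function names, and simulation below are illustrative assumptions, not taken from the article. The break-even strategy rents until the money spent on rentals would equal the purchase price, then buys, and it pays at most twice what an offline algorithm with full knowledge of the input would pay.

```python
# A minimal sketch of an online algorithm: the ski-rental problem.
# Costs and the simulation harness are illustrative assumptions.

def ski_rental_online(days_skied, rent_cost=1, buy_cost=10):
    """Break-even strategy: rent until the money spent on rentals would
    reach the purchase price, then buy. The number of skiing days is
    revealed one day at a time, so each decision is made without
    knowing how many days remain."""
    spent = 0
    for _day in range(days_skied):
        if spent + rent_cost >= buy_cost:
            return spent + buy_cost  # buy once rentals reach the purchase price
        spent += rent_cost           # otherwise keep renting
    return spent

def offline_optimum(days_skied, rent_cost=1, buy_cost=10):
    """With full knowledge of the input, pick the cheaper option outright."""
    return min(days_skied * rent_cost, buy_cost)

# The break-even strategy is 2-competitive: its cost never exceeds
# twice the offline optimum, for any input sequence.
for d in [1, 5, 10, 20, 100]:
    online, opt = ski_rental_online(d), offline_optimum(d)
    print(d, online, opt, online / opt)
```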

A framework related to stochastic optimization is one in which the input to the algorithm is a noisy version of a ground truth, and one would like to find a solution that is good, in an appropriate sense, for the ground-truth input. In this case, one should be concerned about the stability of the algorithm under noisy perturbations. In some cases, it is preferable to apply a suboptimal but stable algorithm to the given data rather than an optimal but unstable one, since the unstable algorithm might return a solution that is optimized for the noisy data but poor for the ground truth. This is a well-known concern in machine learning applications, in which the optimization problem is to find a model of the data that minimizes loss: an unstable optimal algorithm can lead to overfitting.
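The overfitting phenomenon can be seen in a few lines of code. The following sketch fits both an "optimal but unstable" high-degree polynomial and a "suboptimal but stable" linear model to noisy observations of a linear ground truth; the ground truth, noise level, and polynomial degrees are illustrative assumptions.

```python
# A minimal sketch of the stability/overfitting trade-off described
# above, using polynomial regression on synthetic data.

import numpy as np

rng = np.random.default_rng(0)

# Ground truth: a simple linear trend; the observed data are noisy.
x = np.linspace(0, 1, 15)
truth = 2 * x + 1
observed = truth + rng.normal(scale=0.3, size=x.shape)

# "Optimal but unstable": a degree-10 polynomial drives the error
# on the noisy data to nearly zero by chasing the noise.
unstable = np.polyval(np.polyfit(x, observed, deg=10), x)

# "Suboptimal but stable": a degree-1 fit has higher error on the
# noisy data but tracks the ground truth much more closely.
stable = np.polyval(np.polyfit(x, observed, deg=1), x)

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

print("error vs noisy data:   unstable", mse(unstable, observed),
      " stable", mse(stable, observed))
print("error vs ground truth: unstable", mse(unstable, truth),
      " stable", mse(stable, truth))
```

The unstable fit wins on the noisy data but loses on the ground truth, which is exactly the sense in which an unstable optimal algorithm overfits.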

The framework of smoothed analysis of algorithms is similar: one is given a noisy version of an unknown input. In this approach, one looks for a solution that is good for the given noisy input, and the surprising fact is that, in many cases, noise helps in designing algorithms, by ruling out worst-case instances that are fragile under random perturbations. Smoothed analysis is an example of average-case analysis of algorithms under semi-random distributions of inputs, in which the given inputs combine adversarial and probabilistic features.
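For reference, smoothed analysis (in the formulation of Spielman and Teng) interpolates between worst-case and average-case analysis: the smoothed complexity of an algorithm is the worst case, over inputs, of its expected cost after the input is randomly perturbed. With running time $T_A$ and Gaussian noise $g$ of magnitude $\sigma$, the three measures compare as follows:

```latex
% Worst-case, average-case, and smoothed complexity of an algorithm A
% with running time T_A; the perturbation is Gaussian noise g scaled
% by \sigma, following the standard Spielman-Teng formulation.
\begin{align*}
  \text{worst case:}   &\quad \max_{x} \; T_A(x) \\
  \text{average case:} &\quad \mathbb{E}_{x \sim \mathcal{D}} \left[ T_A(x) \right] \\
  \text{smoothed:}     &\quad \max_{x} \; \mathbb{E}_{g} \left[ T_A(x + \sigma g) \right]
\end{align*}
```

As $\sigma \to 0$ the smoothed measure recovers worst-case analysis, while large $\sigma$ washes out the adversarial input and approaches a pure average-case model.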

While stochastic optimization, online algorithms, stability analysis, smoothed analysis, and average-case analysis in semi-random models are all related, they are studied by different communities that do not often come together to share insights. Similarly, researchers interested in logical models of quantum and probabilistic systems do not often interact with logicians interested in database theory, despite their common interest in modeling data analysis. The programs on logic and on algorithms and uncertainty will provide a venue for such interactions, and we look forward to the opportunities for new collaborations that they will enable.

