Thursday, December 15th, 2016

From the Inside: Logical Structures in Computation

by Val Tannen

In 1931, the mathematical logician Kurt Gödel published a paper in which he constructed a true logical assertion that said, essentially, “I am not provable”. This is indeed as bad as it sounds. It indicates an intrinsic limitation of mathematics. As Gödel intended, this work ended the hope, championed by David Hilbert, that mathematics can prove itself free of paradoxes.

Yet the ideas Gödel employed in proving this “negative” result also had enormously positive consequences. They strongly influenced his fellow logicians Alonzo Church, Alan Turing, and Stephen Kleene in laying out mathematical foundations for the questions: What is an algorithm? What is computing? Turing’s formulation is still at the core of the modern theory of computing (the Simons Institute’s raison d’être). Church’s formulation still plays an essential role in understanding programming languages. And Kleene’s work founded the mathematical field of recursion theory.

Thus, the name of our program, Logical Structures in Computation, points to intellectual achievements that played a central role in the birth of computer science. Today, mathematical logic continues to have a thriving and mutually beneficial relationship with many areas of computer science: programming languages, verification, databases, descriptive complexity, and knowledge representation and reasoning. These interactions have led to the emergence of the field of Logic in Computer Science, which spans a vast range of specialized topics and has produced deep and conceptually innovative research.

Within this broad area, one can distinguish two main underlying themes: the interaction of logic with the analysis of algorithms and computational complexity, and the study of the semantics of programs and processes. The aim of our program is to bring together, for the first time on a significant scale, researchers at different ends of this spectrum. The focus is on four different strands: finite and algorithmic model theory, logic and probability, logic and quantum mechanics, and logic and databases.

Finite and algorithmic model theory: In 1950, Boris Trakhtenbrot showed that restricting the meaning of logical sentences to finite (or finitely definable) structures requires new methodologies from logicians. A new branch of mathematical logic was thus inaugurated, one in which computation and definability play central roles. Significantly, finite model theory enables the study of concepts that play a practical role in computer science, such as database queries and constraints, or the specification and verification of properties of hardware and software.
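The finite setting can be made concrete in a few lines: on a finite structure, quantifiers become loops over the domain, so evaluating a first-order sentence is just nested iteration. A minimal sketch (the graph and the sentence are invented for illustration):

```python
# A finite structure: a directed graph on domain {0, 1, 2},
# with the binary edge relation E given as a set of pairs.
nodes = {0, 1, 2}
E = {(0, 1), (1, 2), (2, 0)}

def every_node_has_successor(nodes, E):
    """Evaluate the first-order sentence  forall x. exists y. E(x, y)
    ("every node has an outgoing edge"): on a finite structure, the
    universal quantifier is an `all` loop and the existential an `any` loop."""
    return all(any((x, y) in E for y in nodes) for x in nodes)

print(every_node_has_successor(nodes, E))         # True on the 3-cycle
print(every_node_has_successor(nodes, {(0, 1)}))  # False: nodes 1 and 2 have no successor
```

This brute-force expansion is exactly what makes the finite case algorithmic, and the cost of such nested loops is one starting point for the complexity questions the strand studies.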

Logic and probability: Modeling and computing under uncertainty about one’s assumptions or data is both an inevitable burden and a surprisingly effective strategy in contemporary data science. Probability theory harnesses uncertainty in precise formulations which, when integrated with logical formalisms, cross-fertilize to yield remarkable achievements, as the fields of machine learning and verification show. Theorists and practitioners are designing probabilistic logical frameworks and probabilistic programming languages that can place these achievements in the context of reliable software engineering.
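One concrete bridge between the two formalisms is weighted model counting: each propositional variable holds with some prior probability, and the probability of a logical formula is the total mass of its satisfying assignments. A minimal sketch by enumeration (the variables, priors, and formula are invented for illustration):

```python
from itertools import product

def prob_of_formula(formula, priors):
    """Probability that `formula` holds when each variable in `priors`
    is independently true with the given probability: sum the mass of
    all satisfying truth assignments (weighted model counting)."""
    vars_ = list(priors)
    total = 0.0
    for values in product([False, True], repeat=len(vars_)):
        assignment = dict(zip(vars_, values))
        if formula(assignment):
            p = 1.0
            for v in vars_:
                p *= priors[v] if assignment[v] else 1 - priors[v]
            total += p
    return total

# P(rain) = 0.3, P(sprinkler) = 0.5; the formula "rain OR sprinkler".
priors = {"rain": 0.3, "sprinkler": 0.5}
wet = lambda a: a["rain"] or a["sprinkler"]
print(prob_of_formula(wet, priors))  # 1 - 0.7 * 0.5 = 0.65
```

Enumeration is exponential in the number of variables, which is why practical probabilistic-logic systems invest heavily in smarter counting and approximation.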

Logic and quantum mechanics: The promise of breaking down computational barriers by exploiting quantum entanglement has fascinated the scientific (and the science-fiction!) world for a while, certainly since the publication of Shor’s algorithm for factoring. While a physical realization of this algorithm may still be quite a bit beyond current technology, there is a great deal of lower-hanging fruit. In recent years, a whole research area has developed in which the ideas of logic are used to elucidate quantum information theory, and crucially, the organization and programming of quantum computers.
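The entanglement behind such algorithms fits in a few lines of linear algebra. A minimal state-vector sketch (illustrative only, not tied to any particular framework from the program) that prepares a Bell state:

```python
import numpy as np

# Prepare the Bell state (|00> + |11>) / sqrt(2) by applying a Hadamard
# gate to the first qubit of |00> and then a CNOT.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                # flips qubit 2 when qubit 1 is |1>
ket00 = np.array([1.0, 0.0, 0.0, 0.0])         # the state |00>

bell = CNOT @ np.kron(H, np.eye(2)) @ ket00
print(bell)  # amplitudes on |00> and |11> only, each 1/sqrt(2)
```

Measuring either qubit yields 0 or 1 with probability 1/2, but the two outcomes are perfectly correlated: precisely the kind of structured behavior that logical and categorical methods set out to axiomatize.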

Logic and databases: Following the insights of E.F. “Ted” Codd, first-order logic became the foundation of a technology that has dominated databases for the last 40 years: the relational data model. Closely related to finite model theory, the theory of databases studies the computational complexity and expressive power of query languages and database constraint formalisms, within, as well as beyond, Codd’s model.
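Codd’s correspondence can be seen in miniature: a first-order formula over finite relations is a query. For instance, the formula φ(name) = ∃dept. Works(name, dept) ∧ Located(dept, "Paris") becomes a join followed by a projection. A minimal sketch (the relation names and data are invented for illustration):

```python
# Two finite relations, stored as sets of tuples.
works = {("alice", "sales"), ("bob", "hr")}
located = {("sales", "Paris"), ("hr", "Oslo")}

def paris_employees(works, located):
    """Evaluate  exists dept. Works(name, dept) AND Located(dept, "Paris"):
    the existential quantifier becomes iteration over Located (a join),
    and keeping only `name` is the projection onto the free variable."""
    return {name for (name, dept) in works
            if any(d == dept and city == "Paris" for (d, city) in located)}

print(paris_employees(works, located))  # {'alice'}
```

The same query in SQL would be a SELECT with a join and a WHERE clause; the theory of databases studies exactly how much such logical query languages can express and at what computational cost.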

The Logical Structures in Computation program is currently fostering two-way and three-way interactions among these four strands, advancing the state of the art in each of them. The program is also complemented by three workshops with themes that underlie all four strands: compositionality in computation, symmetry in computation, and uncertainty in computation. Finally, the program provides opportunities for interaction with the Simons Institute program on Algorithms and Uncertainty, which ran concurrently with ours this fall.
