No abstract available.
Saturday, October 17th, 2015
In an old paper, we proved a relationship between the circuit size of SAT and the structure of the polynomial hierarchy. That paper contains a section that almost no one ever reads. I plan to recall that result and show that it is actually relevant to a current problem about playing perfect chess.
In this talk we review five basic, easily accessible results and analyses that I learned from Dick Karp. They span data structures, random walks, and computer systems, and have influenced computing, directly or indirectly, in a number of ways.
Disclaimer: If I only managed to learn five basic results from Dick Karp, the fault is not his; it is entirely mine.
Soon after demonstrating the ubiquity of NP-completeness in optimization, Dick Karp did pioneering work on average-case analysis of NP-hard problems as a way to explain the real-life performance of heuristics. Though this line of work was substantially developed and extended in subsequent decades, the dominant paradigm in algorithm design remained worst-case analysis. In recent years there has been increasing recognition that in many exciting settings in algorithm design, especially those related to machine learning and "big data", the natural problems are intractable, yet are often solved successfully at very large scales in practice by heuristics. The talk will survey problems in a host of application domains, and argue that new modes of analysis, going beyond the worst case, hold the key to further progress.
The accumulation of diverse genomic data (and other types of 'omics' data) on a massive scale creates challenges, and opportunities. Bioinformatics, using techniques from computer science, mathematics, and statistics, must address these challenges now. We describe some of the key problems and our efforts toward solving some of them.
No abstract available.
No abstract available.
The theory of NP-completeness has been the basis for our understanding of important phenomena in the quantum world. The development of the class QMA (the quantum analog of NP) has shed light on the limits of quantum computers as well as the complexity of fundamental problems in condensed matter physics. The connection between these areas arises from the close relationship between the complexity of constraint satisfaction problems and the properties of ground states of local Hamiltonians describing quantum systems. This talk will be a brief survey of how QMA has helped inform our understanding of quantum computation and entanglement as well as some of the questions at the forefront of quantum Hamiltonian complexity.
Human computation deals with the kinds of computation that humans can perform in their heads, without paper and pencil. The area blends theoretical computer science, especially its models of computation, with cognitive science. Applications include CAPTCHAs, password creation (and re-creation), and coin-flipping over the phone.
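The coin-flipping application mentioned above is classically realized with a commitment scheme (Blum's "coin flipping by telephone"): one party commits to a bit before the other announces theirs, so neither can bias the outcome. A minimal sketch using a hash-based commitment follows; the function names are illustrative, and a real human-computation variant would replace the hash with something a person can evaluate mentally.

```python
import hashlib
import secrets

def commit(bit: int) -> tuple[str, bytes]:
    """Alice commits to her bit by hashing it together with a random nonce."""
    nonce = secrets.token_bytes(16)
    digest = hashlib.sha256(bytes([bit]) + nonce).hexdigest()
    return digest, nonce

def verify(digest: str, bit: int, nonce: bytes) -> bool:
    """Bob checks that the revealed bit and nonce match the earlier commitment."""
    return hashlib.sha256(bytes([bit]) + nonce).hexdigest() == digest

# Alice picks a bit and sends only the commitment.
alice_bit = secrets.randbelow(2)
digest, nonce = commit(alice_bit)

# Bob announces his bit, seeing only the commitment (which hides Alice's bit).
bob_bit = secrets.randbelow(2)

# Alice opens the commitment; the shared coin is the XOR of the two bits.
assert verify(digest, alice_bit, nonce)
coin = alice_bit ^ bob_bit
```

The nonce hides the committed bit (a bare hash of 0 or 1 could be guessed), while the collision resistance of the hash prevents Alice from opening the commitment to a different bit after seeing Bob's.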
Instances of computational problems often do not exist in isolation. Rather, multiple correlated instances of the same problem arise naturally in the real world. For example, this has been demonstrated in cryptography for public keys generated in practice, and it arises in error correction when a hard drive contains encodings of multiple files within small edit distance of each other.
We ask how one can gain a computational advantage from instance correlations when they exist. We are interested in settings where significant computational gains can be made in solving a single primary instance by having access to additional auxiliary instances that are correlated with the primary instance via the solution space.
In this work, we focus on studying this question for constraint satisfaction problems (CSPs) with access to multiple instances with correlated solutions.
We consider a variety of naturally occurring worst-case and average-case CSPs, and show how the availability of a small number of auxiliary instances, generated through a natural process that produces instances with correlated solutions, radically changes the complexity of solving the primary instance from intractable to expected polynomial time. Essentially, the availability of a logarithmic number of auxiliary instances enables a close polynomial-time approximation of the primary solution; when, in addition, the "difference vector" between the primary and auxiliary solutions is known, the primary solution can be found exactly. Furthermore, knowing even a single auxiliary instance already enables finding the exact primary solution for a large class of CSPs.
We hope this work is the first step in a wider study of how to exploit correlations in available inputs.
Joint work with Irit Dinur and Rachel Lin.
Richard Karp has had a profound influence on the world of theoretical computer science. I've had the pleasure of learning of his impact in this community in recent years, but he has also influenced my life in many other ways. I will share several of these moments along with some recent results in approximation algorithms and stochastic optimization.