Looking Ahead, Fall 2015: Economics and Computation, Fine-Grained Complexity and Algorithm Design

By Luca Trevisan

In the coming fall semester, the Simons Institute will host a program on economics and computation and one on “fine-grained” aspects of complexity theory and algorithm design.

Economics and Computation


There is a rich body of work at the intersection of economic theory and computer science, motivated both by applications of computer science to economics and by applications of economic theory to computer science.

For example, economic theory provides a model for describing the behavior of agents in a decentralized system, by assuming that each agent associates a utility (a value) with each possible outcome, and that each agent's actions seek to maximize his or her utility. Economic theory predicts that the agents' choices will converge to a Nash equilibrium, that is, a situation in which each agent's actions are optimal for that agent given everybody else's actions (hence, nobody has an incentive to change his or her actions).
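As a concrete illustration of the definition (a minimal sketch, not taken from the article), the following Python code finds the pure-strategy Nash equilibria of a small two-player game, the Prisoner's Dilemma, by checking every action profile for profitable unilateral deviations.

```python
# Minimal sketch: pure-strategy Nash equilibria of a two-player game by
# brute force.  The payoff matrices encode the standard Prisoner's Dilemma
# (action 0 = cooperate, action 1 = defect) and are only an illustration.

from itertools import product

# payoffs[i][(a0, a1)] = utility of player i when player 0 plays a0 and
# player 1 plays a1
payoffs = [
    {(0, 0): 3, (0, 1): 0, (1, 0): 5, (1, 1): 1},  # player 0
    {(0, 0): 3, (0, 1): 5, (1, 0): 0, (1, 1): 1},  # player 1
]

def is_nash(a0, a1):
    """A profile is a Nash equilibrium if neither player can improve his or
    her utility by unilaterally switching to the other action."""
    best_0 = max(payoffs[0][(a, a1)] for a in (0, 1))
    best_1 = max(payoffs[1][(a0, a)] for a in (0, 1))
    return payoffs[0][(a0, a1)] == best_0 and payoffs[1][(a0, a1)] == best_1

print([p for p in product((0, 1), repeat=2) if is_nash(*p)])  # [(1, 1)]
```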

In distributed network problems such as routing, one may then ask how much worse a Nash equilibrium can be, in terms of global parameters such as average congestion and delay, compared to the optimal solution computed by a centralized algorithm that optimizes such global parameters. This ratio, known as the price of anarchy and introduced by Koutsoupias and Papadimitriou, has been actively studied in several contexts over the past fifteen years.
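A standard example (an illustration, not from the article) is Pigou's two-link routing network, where the worst-case ratio works out to 4/3; the short sketch below computes it numerically.

```python
# Sketch: the price of anarchy in Pigou's two-link routing example (a
# standard illustration; the network and numbers are not from the article).
# One unit of traffic travels from s to t over two parallel links:
#   - the top link has constant latency 1
#   - the bottom link has latency x, where x is the fraction of traffic on it

def average_cost(x):
    # x = fraction of traffic on the bottom (latency-x) link
    return x * x + (1 - x) * 1

# At equilibrium all traffic uses the bottom link: its latency never exceeds
# 1, so no individual user gains by switching to the top link.
equilibrium_cost = average_cost(1.0)

# The optimum splits traffic to minimize average cost (a grid search suffices).
optimum_cost = min(average_cost(i / 10000) for i in range(10001))

print(equilibrium_cost / optimum_cost)  # 4/3: the price of anarchy here
```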

The price of anarchy is an example of an idea from economic theory applied to a computer science problem, but there are also computer science ideas applied to models in economic theory. For example, Nash equilibria, mentioned above, are stable configurations in systems of interacting agents studied in economic theory, but are there algorithmic processes that converge to such configurations? It has been shown that computing a Nash equilibrium is PPAD-hard and so, under standard complexity-theoretic assumptions, it is not computable in polynomial time. A number of recent results, both positive and negative, have been developed around the notion of approximate Nash equilibria.
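One natural candidate for such a process (again a sketch under assumed payoffs, not a construction from the article) is best-response dynamics, in which players repeatedly switch to a best response to the others' current actions; it converges in some classes of games but can cycle in general.

```python
# Sketch: best-response dynamics for a two-player game, one natural
# "algorithmic process" aimed at a Nash equilibrium.  It converges in some
# classes of games (e.g. potential games) but can cycle in general, which is
# one reason the convergence question is subtle.  The payoff format is
# assumed, as in the previous sketch: payoffs[i][(a0, a1)] = player i's utility.

def best_response_dynamics(payoffs, num_actions, start=(0, 0), max_steps=100):
    profile = list(start)
    for _ in range(max_steps):
        changed = False
        for i in (0, 1):
            def utility(a, i=i):
                p = list(profile)
                p[i] = a
                return payoffs[i][tuple(p)]
            best = max(range(num_actions), key=utility)
            if utility(best) > utility(profile[i]):
                profile[i] = best
                changed = True
        if not changed:
            return tuple(profile)  # no profitable deviation: a Nash equilibrium
    return None                    # did not converge (e.g. matching pennies)

# With the Prisoner's Dilemma payoffs from the previous sketch this returns
# (1, 1), the unique Nash equilibrium of that game.
```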

Another application of complexity theory to economic theory is in the setting of combinatorial auctions, that is, mechanisms to sell discrete items, which can be bundled in different ways. Classical results in economic theory describe auction algorithms (called mechanisms) that achieve various desirable properties (such as truthfulness), but which require communication and computation that grow exponentially with the number of auctioned items. Such mechanisms are thus inapplicable to several real-world scenarios, such as auctioning online advertisement placements or auctioning wireless frequencies; the mechanisms used in such cases have had to be ad hoc and non-truthful, which leads to several other problems. Complexity-theoretic results have established that exponential communication and time complexity are necessary in several settings, and the design of efficient mechanisms satisfying weaker, but still desirable, properties is a very active area with the potential for significant impact.
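To see where the exponential growth comes from, here is a brute-force sketch of welfare-maximizing allocation, the core computation behind classical truthful mechanisms such as VCG (the valuations and names below are made up for illustration, not taken from the article): with m items and k bidders there are k^m possible assignments, and a bidder's value can depend arbitrarily on the bundle received.

```python
# Sketch: brute-force welfare maximization in a combinatorial auction (the
# valuations below are made up for illustration).  With m items and k bidders
# there are k^m ways to assign items, and each bidder's valuation may assign
# an arbitrary value to each of the 2^m bundles, so both the input and the
# search are exponential in the number of items.

from itertools import product

def best_allocation(num_items, valuations):
    """valuations[i] maps a frozenset of items to bidder i's value for that
    bundle (missing bundles are worth 0).  Tries all k^m assignments."""
    k = len(valuations)
    best_welfare, best_assignment = float("-inf"), None
    for assignment in product(range(k), repeat=num_items):      # k^m profiles
        bundles = [frozenset(j for j in range(num_items) if assignment[j] == i)
                   for i in range(k)]
        welfare = sum(valuations[i].get(bundles[i], 0.0) for i in range(k))
        if welfare > best_welfare:
            best_welfare, best_assignment = welfare, assignment
    return best_welfare, best_assignment

# Tiny example: bidder 0 values the pair of items at 3, bidder 1 values item 1 at 2.
valuations = [{frozenset({0, 1}): 3.0}, {frozenset({1}): 2.0}]
print(best_allocation(2, valuations))  # (3.0, (0, 0)): both items go to bidder 0
```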

The program on Economics and Computation, organized by Costis Daskalakis, Noam Nisan, Christos Papadimitriou, Tim Roughgarden, Ilya Segal, Chris Shannon and Éva Tardos, will gather leading economic theorists and theoretical computer scientists to further the interaction between these two communities and make progress on old and new challenges. Algorithmic game theory is an active area of research in the Bay Area – at Berkeley and Stanford and in the research divisions of several technology companies. To capitalize on this local community, some activities of the program will take place at Stanford, at Google, and at other locations on the San Francisco Peninsula.

Fine-Grained Complexity and Algorithm Design


If we are given a list of n numbers and we want to know if there are two of them that sum to zero, we can easily check this in O(n) time, by storing the list in a hash table and then checking, for every number, if its negative is present in the list. Similarly, if we want to know if there are four numbers that sum to zero we can do it in O(n^2) time by computing all pairwise sums and then applying the idea above (being careful about repetitions). What if we want to know if there are three numbers that sum to zero? No algorithm running in sub-quadratic n^{2-Ω(1)} time is known, no such algorithm exists in restricted models of computation, and it has become a standard conjecture that no such algorithm exists; if this conjecture is true, then several problems in computational geometry that have simple quadratic-time algorithms do not have sub-quadratic algorithms.

A similar theory of “hardness of polynomial-time solvable problems” has been developed for graph parameters such as diameter and centrality: they can be computed in cubic time in dense graphs and in quadratic time in sparse graphs, and there are reductions showing that a faster algorithm for one problem would lead to faster algorithms for other problems.
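To make the baselines for the sum problems above concrete, here is a short sketch (illustrative code, not from the article): the hash-table check for two numbers summing to zero, and the standard quadratic sort-and-scan algorithm for the three-number case (3SUM), which the conjecture says cannot be improved to sub-quadratic time.

```python
# Sketch of the baselines discussed above (illustrative code, not from the
# article): a linear-time hash-set check for two numbers summing to zero,
# and the standard O(n^2) sort-and-scan algorithm for 3SUM.

def has_two_sum_zero(nums):
    """O(n) expected time: remember the numbers seen so far in a hash set
    and look for the negative of each new number."""
    seen = set()
    for x in nums:
        if -x in seen:
            return True
        seen.add(x)
    return False

def has_three_sum_zero(nums):
    """O(n^2) time: sort, then for each choice of the smallest element scan
    the rest with two pointers.  No n^{2-eps}-time algorithm is known."""
    nums = sorted(nums)
    n = len(nums)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = nums[i] + nums[lo] + nums[hi]
            if s == 0:
                return True
            if s < 0:
                lo += 1
            else:
                hi -= 1
    return False

print(has_two_sum_zero([4, -7, 2, 7]))      # True:  7 + (-7) = 0
print(has_three_sum_zero([5, -1, -4, 10]))  # True:  5 + (-1) + (-4) = 0
```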

Another standard conjecture concerning speeding up brute-force algorithms is Impagliazzo and Paturi's Exponential Time Hypothesis, which asserts that there is no 2^{o(n)}-time algorithm for SAT; the Strong Exponential Time Hypothesis, which asserts that there is no 2^{(1-Ω(1))·n}-time algorithm for SAT, is also considered plausible. Such conjectures are known to imply lower bounds on the polynomial running time of polynomial-time solvable problems, such as the graph diameter and graph centrality parameters mentioned before.
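For reference, the brute-force baseline these hypotheses concern is simply trying all assignments; the sketch below (illustrative, not from the article) spells it out for a CNF formula given as lists of signed variable indices.

```python
# Sketch: the brute-force 2^n baseline that ETH and SETH are about
# (illustrative code, not from the article).  A clause is a list of non-zero
# integers: literal v means "variable v is true", -v means it is false.
# Trying all 2^n assignments costs 2^n times the formula size; ETH says no
# 2^{o(n)}-time algorithm exists, and SETH says the base 2 cannot be improved.

from itertools import product

def brute_force_sat(num_vars, clauses):
    for bits in product([False, True], repeat=num_vars):   # all 2^n assignments
        if all(any((lit > 0) == bits[abs(lit) - 1] for lit in clause)
               for clause in clauses):
            return bits    # a satisfying assignment, as a tuple of booleans
    return None            # unsatisfiable

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(brute_force_sat(3, [[1, 2], [-1, 3], [-2, -3]]))  # e.g. (False, True, False)
```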

The Fine-Grained Complexity and Algorithm Design program, organized by Mohan Paturi, Russell Impagliazzo, Dániel Marx, Virginia Vassilevska Williams and Ryan Williams, is aimed at this refined study of complexity theory, in which we do not just distinguish between polynomial and super-polynomial complexity, but wish to determine the exact asymptotic complexity of problems. A major theme of the program is whether “brute force search” in various settings is the best possible algorithm. For Boolean satisfiability problems such as SAT, a positive answer would imply strong lower bounds via reductions. Surprisingly, Williams proved that a negative answer (that is, the existence of satisfiability algorithms significantly faster than brute force search) would also imply circuit lower bounds. The interplay between algorithm analysis and complexity theory is very rich, and it will be the theme of the first workshop, which will take place at the end of September. The other two workshops will focus on the complexity of satisfiability problems and on lower bounds for polynomial-time problems, respectively.
