Abstract

A variety of fundamental computational problems in many fields (game theory, economics, stochastic processes, verification, and other areas), which on the surface may look quite different, can all be boiled down to computing or approximating a fixed point of an algebraically defined function that maps a subset of R^n to itself and is specified by an algebraic circuit (or formula) over the standard gates {+, *, -, /, max, min}, with rational coefficients and constants.
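
To make these objects concrete, here is a minimal sketch in Python of how such a circuit and the fixed point condition F(x) = x can be represented. The circuit encoding, the toy map F(x) = x/2 + 1/2, and the function names (eval_circuit, residual) are illustrative assumptions, not anything from the talk or the underlying papers.

```python
from fractions import Fraction

# Illustrative sketch (assumed encoding): an algebraic circuit over the gates
# {+, -, *, /, max, min} with rational constants, given as a list of gates in
# topological order.  Each gate is ('const', c), ('input', i), or (op, j, k),
# where j and k index earlier gates.

def eval_circuit(gates, x):
    """Evaluate the circuit on the input vector x; return the value of every gate."""
    vals = []
    for g in gates:
        if g[0] == 'const':
            vals.append(Fraction(g[1]))
        elif g[0] == 'input':
            vals.append(Fraction(x[g[1]]))
        else:
            op, j, k = g
            a, b = vals[j], vals[k]
            if op == '+':
                vals.append(a + b)
            elif op == '-':
                vals.append(a - b)
            elif op == '*':
                vals.append(a * b)
            elif op == '/':
                vals.append(a / b)
            elif op == 'max':
                vals.append(max(a, b))
            elif op == 'min':
                vals.append(min(a, b))
    return vals

# Toy map F(x) = x/2 + 1/2 on [0, 2]; its unique fixed point is x = 1.
gates = [('input', 0),                 # gate 0: x_0
         ('const', Fraction(1, 2)),    # gate 1: 1/2
         ('*', 0, 1),                  # gate 2: x_0 * 1/2
         ('+', 2, 1)]                  # gate 3: x_0/2 + 1/2  (the output)
outputs = [3]

def residual(x):
    """Max-norm distance between F(x) and x (zero exactly at a fixed point)."""
    vals = eval_circuit(gates, x)
    return max(abs(vals[o] - xi) for o, xi in zip(outputs, x))

print(residual([Fraction(1)]))      # 0   -> x = 1 is a fixed point
print(residual([Fraction(3, 4)]))   # 1/8 -> x = 3/4 is not
```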

The problems in question share the property that the *existence* of a fixed point is not in doubt: it is guaranteed by some classic fixed point theorem (e.g., Brouwer's, Banach's, or Tarski's). But the proofs of existence do not in general yield an efficient way to find a fixed point. For many of these problems, we neither have efficient (P-time) algorithms, nor do we know them to be NP-hard. Indeed, total search complexity classes such as Papadimitriou's PPAD (= piecewise-linear FIXP) and FIXP have been developed to characterize their complexity.

For some problems in FIXP, we now know that placing any non-trivial approximation of an actual fixed point even in NP would resolve long-standing open problems in arithmetic-vs.-Turing complexity, in particular the PosSLP problem and the square-root-sum problem. For other such problems, we now have P-time approximation algorithms, using a variety of methods (including variants of Newton's method combined with linear programming).
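
As a toy illustration of why Newton-type methods enter the picture (this is not the algorithm from the papers, which handles systems of equations and uses linear programming for the max/min gates), the sketch below contrasts plain fixed-point iteration with Newton's method on the single equation x = (x^2 + 1)/2, whose fixed point x* = 1 is "critical" in the sense that F'(x*) = 1. The equation and function names are assumptions made purely for illustration.

```python
# Hedged toy sketch: plain fixed-point iteration vs. Newton's method for x = F(x),
# with F(x) = (x^2 + 1)/2.  The fixed point is x* = 1 and F'(x*) = 1, so plain
# iteration converges very slowly (error roughly 2/n after n steps), while
# Newton's method applied to G(x) = F(x) - x still converges quickly.

def F(x):
    return (x * x + 1.0) / 2.0

def kleene_iteration(x=0.0, steps=60):
    # Repeatedly apply F; increases monotonically toward the least fixed point.
    for _ in range(steps):
        x = F(x)
    return x

def newton_iteration(x=0.0, steps=60):
    # Newton's method on G(x) = F(x) - x:  x <- x - G(x)/G'(x), with G'(x) = x - 1.
    for _ in range(steps):
        g, dg = F(x) - x, x - 1.0
        if dg == 0.0:
            break          # already at the fixed point
        x -= g / dg
    return x

print(kleene_iteration())   # ~0.97 after 60 steps: still noticeably below x* = 1
print(newton_iteration())   # ~1.0: reaches x* = 1 up to machine precision
```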

What makes such a fixed point problem "hard" or "easy"?

In this talk, I will survey a number of such problems, and I will delineate a taxonomy of the (relative) complexity of such problems, based on both their combinatorial and their numerical "difficulty", and based on the nature of their defining algebraic circuits.

This talk is based on joint work, in a series of papers, with Mihalis Yannakakis and with Alistair Stewart.
