Traditionally, theoretical computer science assumes complete uncertainty about an instance before it is handed to an algorithm (i.e., worst-case analysis) and complete certainty afterwards (i.e., the instance is fully specified). In practice, however, neither assumption is quite correct. Instances of optimization problems from different application domains, such as SAT instances from verification or clustering instances from AI applications, often have significant structure. Furthermore, when an instance is first presented, the objective, model, or input is often not completely known; examples include jobs with unknown runtimes, stochasticity in robot motion planning, and unknown agent types in mechanism design.
As a consequence, results from theoretical computer science can be simultaneously too pessimistic (e.g., proving strong hardness of approximation for problems where heuristic methods succeed in practice) and too optimistic (e.g., producing algorithms that cannot be used because they are brittle to the instance specification). These two issues are fundamentally related, and addressing them involves a strong and vital interplay among topics such as smoothed analysis, semi-random models, instance-stability notions, and stochastic and robust optimization.
Research programs in these directions have gained considerable momentum lately, making this an opportune time to map out future directions jointly. This program will bring together researchers to develop meaningful alternatives to worst-case analysis of algorithms, using ideas from machine learning, stability analysis, and stochastic optimization. The program will also explore new methods for addressing uncertainty in instance specifications, including ideas from control theory and convex optimization.