From the Inside: Algorithms and Uncertainty

by Avrim Blum

Algorithms have become increasingly important in everyday life, in ways both visible and invisible. Examples include path-finding algorithms for routing traffic on the Internet or for telling us the fastest way to get from place to place on our roads, searching and clustering algorithms that quickly answer web-search queries and display the results in an intelligent manner, algorithms for efficient management of the power grid, and many, many more.

The development of interesting algorithms in fact has a long history, with methods for solving various “word problems” appearing on ancient Babylonian clay tablets from 1600 BCE, and Euclid in 300 BCE developing his famous algorithm for quickly computing greatest common divisors. Today, the design of algorithms for tackling all kinds of problems is an important part of computer science research.
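For readers curious how Euclid's method works, here is a minimal sketch in modern Python (the variable names and sample numbers are ours, not Euclid's): repeatedly replace the larger number by the remainder of dividing it by the smaller one, and the last nonzero remainder is the greatest common divisor.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: the last nonzero remainder is the GCD."""
    while b != 0:
        a, b = b, a % b  # replace (a, b) with (b, a mod b)
    return a

print(gcd(1071, 462))  # -> 21, since 1071 = 51*21 and 462 = 22*21
```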

Traditionally, when designing an algorithm to solve some problem, it is assumed that one has complete uncertainty about an instance before it is handed to the algorithm, and then complete certainty about the instance afterwards. For example, in designing an algorithm to compute the fastest way to get from place to place in a road network (called the “shortest path problem” in computer science), the algorithm should be able to perform well on any road network. So in designing the algorithm, the networks that one cares about are unknown. At runtime, the algorithm will be handed a specific road network with specific driving times for each road segment, and a specific start and end location, and then it will solve for the best route to take. So, in running the algorithm, the relevant information is fully known.
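To make this concrete, here is a minimal sketch of that classic setting, using Dijkstra's algorithm (one standard textbook method for shortest paths; the choice of algorithm and the road network below are our illustrative assumptions) on a small made-up network whose driving times are fully known at runtime.

```python
import heapq

def shortest_travel_time(graph, start, end):
    """Dijkstra's algorithm over fixed, fully known driving times."""
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == end:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry; a shorter path was already found
        for neighbor, minutes in graph.get(node, []):
            nd = d + minutes
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return float("inf")  # no route exists

# Made-up road network: location -> [(next location, driving time in minutes)]
roads = {
    "home":   [("bridge", 10), ("tunnel", 15)],
    "bridge": [("office", 12)],
    "tunnel": [("office", 5)],
}
print(shortest_travel_time(roads, "home", "office"))  # -> 20, via the tunnel
```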

However, many of today's algorithmic problems have a somewhat different character. First, at design time, there are often properties of the cases one cares about that are known in advance and could be used to advantage. For example, in designing clustering algorithms for various AI applications, one can often assume the data to be clustered will indeed contain natural groupings (otherwise even the best algorithm will do a poor job). Or in problems of choosing what generators to run in a power grid to satisfy demands while minimizing cost, there is often significant structure to what the inputs look like. So, in this sense, the classic approach to algorithm design that ignores this information at design time can be too pessimistic. On the other hand, at runtime, often some relevant information is still not fully known, such as exactly how much congestion there is on some road in a road network, what a user is really searching for in answering web-search queries, or precisely how much energy a solar array will generate in a given time period. So, in this sense, the classic approach to algorithm design, which assumes all information will be available at runtime, is too optimistic.
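As a toy illustration of the runtime side of this, suppose the driving time on one road segment is random because of congestion. The sketch below (all probabilities and times are invented for illustration) estimates travel times by simulation: the route that is faster most of the time can still be worse on average once occasional congestion is accounted for.

```python
import random

random.seed(0)

def tunnel_route():
    # Usually 20 minutes, but a 30% chance of a 15-minute jam.
    # (All numbers here are invented for illustration.)
    return 20 + (15 if random.random() < 0.3 else 0)

BRIDGE_MINUTES = 22  # a slower but perfectly predictable alternative

samples = [tunnel_route() for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(f"tunnel: mean {mean:.1f} min, worst observed {max(samples)} min")
print(f"bridge: always {BRIDGE_MINUTES} min")
# The tunnel wins 70% of the time, yet its mean (about 24.5 min)
# is worse than the bridge's guaranteed 22 min.
```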

The goal of this semester-long program on Algorithms and Uncertainty has been to bring researchers together to discuss and tackle both of these issues, which we term the yin and the yang of uncertainty in algorithm design. Researchers have been examining approaches for making decisions under uncertainty, such as allocating resources in the presence of unknown future demand, as well as ways of taking advantage of inherent structure that is present in “typical” cases.

The program began with a “boot camp,” in which experts presented background in topics ranging from stochastic optimization methods to Markov Decision Processes, to the kinds of algorithmic challenges that arise when incorporating renewable energy sources into the electrical grid. A first workshop then examined a variety of issues in optimization and decision-making under uncertainty. These include fundamental trade-offs between information-gathering (exploration, trying something new) and using the information gathered so far (exploitation). Also discussed were topics such as design of more efficient clinical trials, algorithms for pricing products in unknown markets, and methods for scheduling computational tasks. A second workshop focused on non-worst-case analysis of algorithms, including models that interpolate between worst-case and average-case analysis, the use of principles from machine learning theory to adapt algorithms to the typical cases in a given application, and experiences of researchers from various application areas. In between, researchers have been making advances in addressing a range of algorithmic problems that involve both these yin and yang aspects of uncertainty.
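To give a flavor of the exploration/exploitation trade-off examined at the first workshop, here is a minimal epsilon-greedy sketch for a two-armed bandit (the strategy, payoff probabilities, and parameters are our illustrative assumptions, not results from the program): with small probability the algorithm explores an option at random, and otherwise it exploits the option with the best observed average payoff.

```python
import random

random.seed(1)

TRUE_PAYOFF = [0.4, 0.6]   # hidden success probability of each arm
counts = [0, 0]            # times each arm has been tried
totals = [0.0, 0.0]        # total reward observed per arm

def epsilon_greedy(eps=0.1, rounds=10_000):
    reward = 0.0
    for _ in range(rounds):
        if random.random() < eps or 0 in counts:
            arm = random.randrange(2)  # explore: try an arm at random
        else:
            arm = max((0, 1), key=lambda a: totals[a] / counts[a])  # exploit
        payoff = 1.0 if random.random() < TRUE_PAYOFF[arm] else 0.0
        counts[arm] += 1
        totals[arm] += payoff
        reward += payoff
    return reward

avg = epsilon_greedy() / 10_000
print(f"average reward per round: {avg:.3f}")  # close to the best arm's 0.6
```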

Overall, this program has been an exciting semester in which researchers from many different countries and institutions have come together to collaborate on furthering the science of algorithm design for tackling today's and tomorrow's computational challenges.
