In this presentation, we describe a statistical approach for analyzing how the empirical running time of an algorithm scales when it is applied to sets of inputs of growing size. Unlike asymptotic worst-case analysis, this approach is applicable to sophisticated implementations of high-performance, highly heuristic algorithms on arbitrary distributions of inputs. It is based on standard numerical techniques for fitting parametric models, which are then challenged by extrapolation to larger problem sizes and statistically validated using bootstrap confidence intervals. Our approach permits not only the automatic construction of predictive models of a given algorithm’s running time, but also the comparative evaluation of multiple hypotheses on its scaling, in the form of different parametric functions. Using it, we have obtained empirical scaling analysis results for state-of-the-art TSP and SAT algorithms that challenge conventional wisdom regarding the complexity of these problems.
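The fit-extrapolate-validate loop described above can be sketched in a few lines. The following is a minimal, self-contained illustration on synthetic data, not the authors' actual tooling: the data, the two candidate scaling hypotheses (power law vs. exponential), the log-space least-squares fit, and the residual-resampling bootstrap are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "measured" median running times at small instance sizes n,
# drawn from a power law with lognormal noise (purely illustrative data).
n = np.array([100, 200, 400, 800, 1600], dtype=float)
t = 1e-4 * n**1.8 * rng.lognormal(0.0, 0.05, n.size)

# Two competing scaling hypotheses, fitted by least squares in log space:
#   power law:    log t = log a + b * log n
#   exponential:  log t = log a + n * log b
b_pow, loga_pow = np.polyfit(np.log(n), np.log(t), 1)
logb_exp, loga_exp = np.polyfit(n, np.log(t), 1)

def pred_pow(x):
    """Fitted power-law model a * x^b."""
    return np.exp(loga_pow) * x**b_pow

def pred_exp(x):
    """Fitted exponential model a * b^x."""
    return np.exp(loga_exp + logb_exp * x)

# Challenge the power-law hypothesis by extrapolating to a larger size and
# computing a bootstrap percentile confidence interval on the prediction,
# here via resampling of multiplicative residuals.
n_large = 6400.0
ratios = t / pred_pow(n)          # multiplicative residuals of the fit
B = 1000
preds = np.empty(B)
for i in range(B):
    t_star = pred_pow(n) * rng.choice(ratios, ratios.size)  # resampled data
    b_s, loga_s = np.polyfit(np.log(n), np.log(t_star), 1)
    preds[i] = np.exp(loga_s) * n_large**b_s
lo, hi = np.percentile(preds, [2.5, 97.5])

print(f"power-law exponent estimate: {b_pow:.3f}")
print(f"95% bootstrap CI for t({n_large:.0f}): [{lo:.2f}, {hi:.2f}]")
# The power-law hypothesis is rejected if running times actually observed
# at n_large fall outside this interval; otherwise it survives the challenge.
```

In practice one would fit on many instance sizes and per-size running-time distributions, and would subject every candidate parametric family to the same extrapolation challenge before ranking them.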