Abstract

This talk presents a method for selecting, from a set of causal-effect identification formulas, the one with the lowest asymptotic variance, in a sequential setting in which the investigator may adapt the data-collection mechanism in a data-dependent way so as to identify the best formula in as few samples as possible. We formalize this setting using the best-arm-identification bandit framework, replacing the standard goal of learning the arm with the lowest loss with the goal of learning the arm that will produce the best estimate. We introduce new tools for constructing finite-sample confidence bounds on estimates of the asymptotic variance that account for the estimation of potentially complex nuisance functions, and we adapt the best-arm-identification algorithms LUCB and Successive Elimination to use these bounds. We validate our method with upper bounds on its sample complexity and with an empirical study on synthetic data.