Abstract

This talk explores the use of machine learning to empower sequential experimental design. A prominent challenge in sequential experimental design is quantifying the value of information: how much will the measurement from an experiment help us in planning future experiments to accomplish a desired goal? Historically, the value of information was analyzed using well-understood probabilistic models such as Gaussian processes. The use of learning offers the potential for improved flexibility and representational power, but also brings with it challenges in principled algorithm design (e.g., whether we can even expect to have calibrated uncertainties). This talk surveys several projects that study this challenge from different angles, including frequentist ensembles, (Bayesian) deep kernel learning, and directly modeling the value of information using deep neural networks.
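For reference, one standard formalization of the value of information (a common textbook definition, not necessarily the one used in the talk) measures the expected gain in achievable utility from observing the outcome $y$ of a candidate experiment $x$, given current data $\mathcal{D}$, candidate actions $a$, and a terminal utility $U$:

$$
\mathrm{EVOI}(x) \;=\; \mathbb{E}_{y \sim p(y \mid x, \mathcal{D})}\!\left[\, \max_{a}\, \mathbb{E}\big[U(a) \mid \mathcal{D} \cup \{(x, y)\}\big] \right] \;-\; \max_{a}\, \mathbb{E}\big[U(a) \mid \mathcal{D}\big]
$$

Under a Gaussian process model, the predictive distribution $p(y \mid x, \mathcal{D})$ is available in closed form, which makes such quantities tractable to analyze; with learned models, estimating them reliably is exactly where calibrated uncertainties become critical.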

Video Recording