Abstract

We discuss a general approach to hypothesis testing. The main building block of the approach is the notion of a "good" observation scheme (o.s.); examples include the cases where an observation is (a) an affine image of a vector ("signal") corrupted by white Gaussian noise, (b) a vector with independent Poisson entries whose parameters depend affinely on the signal, (c) a random variable taking finitely many values with probabilities affinely parameterized by the signal, and (d) naturally defined direct products of o.s.'s (a)-(c), reflecting what happens when one passes from a single observation to a collection of independent observations. We show that, given a good o.s., a near-optimal test deciding on a (finite) collection of convex hypotheses (i.e., hypotheses stating that the signal underlying the observations belongs to a convex compact set associated with the hypothesis), along with the risk of the test, can be computed efficiently via Convex Optimization. We discuss sequential and dynamical versions of the tests and outline some applications, including recovery, near-optimal in the minimax sense, of a linear form of a signal observed via a good o.s. The talk is based on joint research with Alexander Goldenshluger and Anatoli Iouditski.
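To make the computational recipe concrete, here is a minimal sketch of its simplest instance, under assumptions not spelled out in the abstract: the Gaussian o.s. (a) with identity sensing matrix, omega = x + sigma*xi, xi ~ N(0, I), and two convex hypotheses taken to be Euclidean balls (the centers c1, c2, radii r1, r2, and the helper phi below are hypothetical, chosen only for illustration). In this case the near-optimal affine detector is built from the closest points x1*, x2* of the two sets, found by a convex program, and the risk of the resulting test is bounded by exp(-||x1* - x2*||^2 / (8 sigma^2)).

import numpy as np
import cvxpy as cp

d = 4
sigma = 1.0

# Hypothetical convex hypotheses: H1: x in ball(c1, r1), H2: x in ball(c2, r2)
c1, r1 = np.zeros(d), 1.0
c2, r2 = 6.0 * np.ones(d) / np.sqrt(d), 1.0

# Closest points of the two sets: a convex program
x1, x2 = cp.Variable(d), cp.Variable(d)
prob = cp.Problem(cp.Minimize(cp.norm(x1 - x2, 2)),
                  [cp.norm(x1 - c1, 2) <= r1,
                   cp.norm(x2 - c2, 2) <= r2])
prob.solve()

delta = x1.value - x2.value          # x1* - x2*
mid = (x1.value + x2.value) / 2.0
rho = np.linalg.norm(delta)          # distance between the sets

# Affine detector: accept H1 iff phi(omega) >= 0, else accept H2
def phi(omega):
    return float(delta @ (omega - mid)) / sigma**2

# Risk bound of this test in the Gaussian o.s.
risk_bound = np.exp(-rho**2 / (8.0 * sigma**2))
print(f"||x1*-x2*|| = {rho:.3f}, risk bound = {risk_bound:.4f}")

Accepting H1 when phi(omega) >= 0 amounts to the likelihood-ratio test between the two closest points; the point of the theory is that this simple test is near-optimal uniformly over the whole sets, and that for the other good o.s.'s the detector and its risk are obtained from analogous convex programs rather than from this closed form.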
