Abstract

Undirected graphs are often used to describe high-dimensional distributions.

Under sparsity conditions, the graph can be estimated using penalization methods, such as the graphical lasso (Friedman et al., 2008) and multiple (nodewise) regressions (Meinshausen and Bühlmann, 2006). Under suitable conditions, such approaches yield consistent (and sparse) estimation of the graphical structure and fast convergence rates, with respect to the operator and Frobenius norms, for the covariance matrix and its inverse.

However, many of the statistical models considered to date remain overly simplistic and not fully reflective of reality. For example, in neuroscience one must account for temporal correlations as well as spatial correlations, which reflect the connectivity formed by the neural pathways. Yet the high-dimensional statistics literature has primarily focused on estimating linear or graphical models from independent and identically distributed samples. In the case of graphical models, the data matrix is usually assumed to have independent rows or columns that follow the same distribution. These independence assumptions substantially simplify mathematical derivations, but they tend to be very restrictive.
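To make the graph-estimation step concrete, here is a minimal sketch of the graphical lasso on simulated i.i.d. Gaussian data, using scikit-learn's `GraphicalLasso` (the choice of library, regularization level `alpha`, and simulated precision matrix are illustrative assumptions, not part of the talk):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Simulate i.i.d. samples from a Gaussian whose precision matrix is sparse:
# a single off-diagonal entry (0,1) encodes one edge in the true graph.
p = 5
prec = np.eye(p)
prec[0, 1] = prec[1, 0] = 0.4
cov = np.linalg.inv(prec)
X = rng.multivariate_normal(np.zeros(p), cov, size=500)

# L1-penalized maximum-likelihood estimate of the precision matrix
# (alpha is a hypothetical regularization choice).
model = GraphicalLasso(alpha=0.05).fit(X)
est_prec = model.precision_

# Nonzero off-diagonal entries of the estimated precision matrix
# correspond to edges of the estimated undirected graph.
edges = np.argwhere(np.triu(np.abs(est_prec) > 1e-4, k=1))
print(edges)
```

The nodewise-regression approach of Meinshausen and Bühlmann would instead run a lasso regression of each variable on all the others and read edges off the supports of the fitted coefficient vectors.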

In this talk, I will highlight some recent progress that builds upon matrix- and tensor-variate statistical models and convex and nonconvex optimization methods to address this issue. Statistical consistency and rates of convergence are established for several models and methods for covariance and inverse covariance (precision) matrix estimation.

Part of this talk is based on joint work with Michael Hornstein, Kristjan Greenewald, Roger Fan, Kerby Shedden, and Alfred Hero.

Video Recording