
Abstract
Given i.i.d. samples from an unknown distribution $P$, the goal of distribution learning is to recover the parameters of a distribution close to $P$. We revisit this problem when the learner is additionally given the parameters of a distribution $Q$ as advice. Suppose $P$ belongs to the class of balanced product distributions over the $n$-dimensional Boolean hypercube. We show that there is an efficient algorithm that learns $P$ to within TV distance $\epsilon$ using $\tilde{O}(n^{1-c\eta}/\epsilon^2)$ samples, for a constant $c>0$, whenever $\|\bm{p} - \bm{q}\|_1 < \epsilon n^{(1-\eta)/2}$. Here, $\bm{p}$ and $\bm{q}$ are the mean vectors of $P$ and $Q$ respectively, and $\eta$ is not known a priori. Without advice, the sample complexity is known to be linear in $n$. We also prove analogous results for learning high-dimensional Gaussians.
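To make the setting concrete, the following is a minimal sketch (not the algorithm from this paper) of the advice-free baseline: estimating the mean vector of a balanced product distribution over $\{0,1\}^n$ by coordinate-wise empirical averages, together with the crude coupling bound $\mathrm{TV}(P,Q)\le\sum_i |p_i-q_i|$ used to relate mean-vector error to TV distance. The names `empirical_means` and `tv_upper_bound_product`, and all numeric choices, are illustrative assumptions.

```python
import numpy as np

def empirical_means(samples):
    # samples: (m, n) array of {0,1} bits drawn i.i.d. from a product
    # distribution P; the coordinate-wise average is the natural estimate
    # of the mean vector p. Achieving TV error epsilon this way needs
    # roughly n/epsilon^2 samples, the baseline the paper improves on.
    return samples.mean(axis=0)

def tv_upper_bound_product(p, q):
    # For product distributions on {0,1}^n with mean vectors p and q,
    # a coordinate-wise coupling gives TV(P, Q) <= sum_i |p_i - q_i|.
    return np.abs(p - q).sum()

rng = np.random.default_rng(0)
n, m = 50, 10000
# "Balanced" means each coordinate mean is bounded away from 0 and 1.
p_true = rng.uniform(0.3, 0.7, size=n)
samples = (rng.random((m, n)) < p_true).astype(int)
p_hat = empirical_means(samples)
```

The paper's contribution is to beat this baseline's linear-in-$n$ sample complexity when the advice vector $\bm{q}$ is close to $\bm{p}$ in $\ell_1$ norm.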