Abstract

Noise is a ubiquitous property of biological neural circuits: trial-to-trial variance in neural spike counts often equals or exceeds the mean neural response in magnitude. Noise has also been incorporated into artificial neural networks, for example as a form of regularization. But the detailed impact of noise on neural computations and hidden-layer representations remains poorly understood. In neuroscience, a primary challenge has been to accurately estimate the statistics of noise from a limited number of trials. To address this, we introduce a statistical model that leverages smoothness in experimental paradigms to derive efficient estimates of trial-to-trial noise covariance. Furthermore, to compare the structure of noise between artificial and biological networks, we propose a novel measure of neural representational similarity for stochastic networks. Together, these analytic tools enable new lines of investigation into non-deterministic modes of neural network function.

Video Recording