Description

Expander Graph Architectures for High-Capacity Neural Memory

Memory networks in the brain must balance two competing demands. On the one hand, they should have high capacity, so that they can store the large number of stimuli an organism must remember over a lifetime. On the other hand, noise is ubiquitous in the brain and memories are typically retrieved from incomplete input. Thus, memories must be encoded with some redundancy, which reduces capacity. Current neural network models of memory storage and error correction manage this tradeoff poorly: they either yield suboptimal growth of capacity with network size or exhibit poor robustness to noise. I will show that a canonical model of neural memory, the Hopfield network, can represent a number of states exponential in network size while robustly correcting errors in a finite fraction of nodes. This answers a long-standing question about whether neural networks can combine exponential capacity with noise robustness.
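For concreteness, the sketch below implements the classical Hopfield network referenced above, with Hebbian (outer-product) storage and asynchronous recall from a corrupted cue. It illustrates the standard model, whose capacity grows only linearly with network size, not the exponential-capacity construction described in this talk; the sizes N, P, and the corruption level are arbitrary illustrative choices.

```python
# Classical Hopfield network: Hebbian storage, asynchronous recall from a noisy cue.
# This is the textbook model (capacity ~0.14*N patterns), not the talk's construction.
import numpy as np

rng = np.random.default_rng(0)

N = 200          # number of neurons
P = 10           # number of stored patterns (well below classical capacity)
flip_frac = 0.1  # fraction of bits corrupted in the recall cue

# Random +/-1 patterns and Hebbian (outer-product) weights with zero diagonal.
patterns = rng.choice([-1, 1], size=(P, N))
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

def recall(cue, n_sweeps=20):
    """Asynchronous updates: each neuron aligns with its local field."""
    s = cue.copy()
    for _ in range(n_sweeps):
        for i in rng.permutation(N):
            h = W[i] @ s
            s[i] = 1 if h >= 0 else -1
    return s

# Corrupt one stored pattern and try to recover it.
target = patterns[0]
cue = target.copy()
flip = rng.choice(N, size=int(flip_frac * N), replace=False)
cue[flip] *= -1

recovered = recall(cue)
# Overlap is ~1.0 when recall succeeds at this low memory load.
print("overlap with stored pattern:", (recovered @ target) / N)
```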

These robust exponential-capacity Hopfield networks are constructed using recent results in information theory, which show how error-correcting codes on large, sparse graphs (“expander graphs”) can leverage many weak constraints to produce near-optimal performance. The network architectures exploit generic properties of large, distributed systems and map naturally onto neural dynamics, suggesting appealing theoretical frameworks for understanding computation in the brain. Moreover, they suggest a computational explanation for the sparsity of neural responses observed in many cognitive brain areas. These results thus link powerful error-correcting frameworks to neuroscience, providing insight into principles that neurons might use and potentially offering new ways to interpret experimental data.
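To convey the flavor of sparse-graph error correction that these architectures build on, the sketch below runs a simple sequential bit-flipping decoder (in the spirit of Gallager and Sipser-Spielman expander codes) on a small random bipartite graph: each bit participates in a few parity checks, and decoding repeatedly flips the bit whose checks are most violated. The graph here is a generic random construction rather than a certified expander, and the decoder is a standard textbook variant, not the specific network architecture of the talk; all sizes and degrees are arbitrary choices.

```python
# Toy sparse-graph ("expander code" style) decoding: many weak parity checks
# on a random bipartite graph, corrected by greedy bit flipping.
import numpy as np

rng = np.random.default_rng(7)

n_vars, var_deg = 150, 5           # variable (bit) nodes, checks per bit
n_checks = n_vars * var_deg // 10  # parity-check nodes, average degree ~10

# Sparse parity-check matrix: each bit participates in var_deg random checks.
H = np.zeros((n_checks, n_vars), dtype=int)
for v in range(n_vars):
    H[rng.choice(n_checks, size=var_deg, replace=False), v] = 1

def bit_flip_decode(y, H, max_flips=100):
    """Flip the bit with the most violated checks until no majority remains."""
    x = y.copy()
    deg = H.sum(axis=0)                  # per-bit check degrees
    for _ in range(max_flips):
        syndrome = (H @ x) % 2           # 1 where a parity check fails
        if not syndrome.any():
            break                        # all checks satisfied
        violations = H.T @ syndrome      # violated checks per bit
        v = int(np.argmax(violations))
        if 2 * violations[v] <= deg[v]:
            break                        # no bit has a majority of checks violated
        x[v] ^= 1                        # each flip strictly reduces the syndrome weight
    return x

# The all-zero word satisfies every parity check; corrupt a few bits and decode.
noisy = np.zeros(n_vars, dtype=int)
noisy[rng.choice(n_vars, size=3, replace=False)] = 1

decoded = bit_flip_decode(noisy, H)
# Typically 0 for a few scattered errors on a well-connected random graph.
print("remaining bit errors:", int(decoded.sum()))
```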

