Abstract

We investigate neural circuits in the exacting setting in which (i) the acquisition of a piece of knowledge can occur from a single interaction, (ii) the result of each such interaction is a rapidly evaluatable subcircuit, (iii) hundreds of thousands of such subcircuits can be acquired in sequence without substantially degrading the earlier ones, and (iv) recall can take the form of a rapid evaluation of a composition of subcircuits that were acquired at arbitrary earlier times.

We develop a complexity theory, in terms of asymptotically matching upper and lower bounds, on the capacity of a neural network for executing, in this setting, the following action, which we call association: Each action sets up a subcircuit so that the excitation of a chosen set of neurons A will in future cause the excitation of another chosen set B. A succession of experiences, possibly over a lifetime, results in the realization of a complex set of subcircuits. The composability requirement constrains the model to ensure that, for each association as realized by a subcircuit, the excitation in the triggering set of neurons A is quantitatively similar to that in the triggered set B, and also that the unintended excitation in the rest of the system is negligible. These requirements ensure that chains of associations can be triggered.
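
The toy sketch below (in Python) illustrates the association primitive under a simplified threshold-firing model; the parameter values, firing rule, and weighting scheme are assumptions made only for illustration, not the paper's construction or analysis. It sets up one association A -> B using only the direct edges from A to B, in the spirit of the Basic Mechanism discussed next, and reports how much of B is triggered when A fires.

```python
import random

# Toy illustration (assumed parameters, not the paper's construction):
# realize one association A -> B by potentiating only the direct edges
# from the triggering set A into the target set B.

random.seed(0)
n, d, r = 2000, 45, 50            # neurons, out-degree (~sqrt(n)), set size
threshold = 1                     # toy firing threshold on potentiated input

# Sparse random directed graph: each neuron has d random out-neighbours,
# approximating expected degree d to and from each neuron.
out_edges = [random.sample(range(n), d) for _ in range(n)]

A = set(random.sample(range(n), r))                      # triggering set
B = set(random.sample(sorted(set(range(n)) - A), r))     # disjoint target set

# "Set up" the association: potentiate exactly the existing direct edges
# from A into B; all other edges are left with negligible weight.
potentiated = [(i, j) for i in A for j in out_edges[i] if j in B]
print("direct A->B edges available:", len(potentiated),
      "(about r*r*d/n =", r * r * d // n, "expected)")

# Fire A and apply one step of threshold firing on potentiated input only.
incoming = [0] * n
for (_, j) in potentiated:
    incoming[j] += 1
fired = {j for j in range(n) if incoming[j] >= threshold}
print("fraction of B triggered:", len(fired & B) / r)
```

With these assumed numbers each neuron of B receives only about r*d/n (roughly one) potentiated connection from A, which is the scarce resource that the capacity analysis of the Basic Mechanism has to work with.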

We first analyze what we call the Basic Mechanism, which uses only the direct connections between neurons in the triggering set A and the target set B. We consider random networks of n neurons, with an expected number d of connections to and from each neuron. We show that in the composable context the capacity is limited by d^2, a severe limitation if the network is sparse, as it is in cortex. We go on to study the Expansive Mechanism, which additionally uses intermediate relay neurons with high synaptic weights. For this mechanism we show that the capacity can grow as dn, to within logarithmic factors. From these two results it follows that in the composable regime, for the realistic estimate that d is about the square root of n, the Expansive Mechanism realizes superlinear capacity in the number of neurons (of order dn = n^{3/2}, up to logarithmic factors), whereas the Basic Mechanism is limited to linear order n.
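
As a rough illustration of the gap between the two bounds, the following sketch plugs in cortex-like numbers; the value of n is an assumed order of magnitude, d is taken as sqrt(n) as in the text, and the constants and logarithmic factors of the actual bounds are ignored.

```python
import math

# Back-of-the-envelope comparison of the two capacity bounds,
# ignoring constants and logarithmic factors.

n = 10**10                 # illustrative (assumed) neuron count
d = math.isqrt(n)          # d ~ sqrt(n) connections per neuron -> 100,000

basic = d * d              # Basic Mechanism: capacity limited by d^2 ~ n
expansive = d * n          # Expansive Mechanism: capacity ~ d*n ~ n^{3/2}

print(f"d                = {d:,}")
print(f"Basic     ~ d^2  = {basic:,}  (linear in n)")
print(f"Expansive ~ d*n  = {expansive:,}  (superlinear in n)")
```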

Reference: L. G. Valiant, Capacity of Neural Networks for Lifelong Learning of Composable Tasks, Proc. 58th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2017), Berkeley, California, October 15-17, 2017, pp. 367-378.