Abstract

Much of perception and cognition requires solving inverse problems: Given the photoreceptor activations in the retina, what is the structure of the external environment? Given a stream of words, what is the thought or concept being conveyed? Many of these inverse problems can be formulated as factorization problems - e.g., factorizing form from motion in time-varying images, or factorizing a word into its meaning and its part of speech. In Kanerva's high-dimensional computing framework, such problems can be posed as the factorization of a high-dimensional vector into its constituent components. 'Resonator circuits' provide an efficient means of solving this problem by iteratively estimating the factors via a set of coupled Hopfield networks. Here I will present a set of rigorous evaluations of their performance in comparison to alternative schemes such as multiplicative weights, alternating least squares, and map-seeking circuits. Resonator circuits vastly outperform these alternatives in terms of operational capacity (the size of the search space from which a correct factorization can be found with high probability). Interestingly, all of these methods work by searching in superposition - that is, the estimate at any given point in time is a superposition of the possible factorizations - which works due to the properties of high-dimensional vector spaces. With Spencer Kent and Paxon Frady.
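The factorization idea above can be illustrated with a minimal sketch of a resonator circuit. All details here are assumptions for illustration (bipolar {-1, +1} vectors, elementwise multiplication as the binding operation, random codebooks, and the dimensions chosen below), not the exact setup used in the evaluations:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 2000, 15  # vector dimension, codebook size per factor

# sign that never returns 0, so estimates stay bipolar
sgn = lambda v: np.where(v >= 0, 1, -1)

# Random bipolar codebooks for each of the three factors
codebooks = [rng.choice([-1, 1], size=(M, N)) for _ in range(3)]

# Compose a composite vector s = x * y * z from one entry per codebook
true_idx = [3, 7, 11]
s = np.prod([cb[i] for cb, i in zip(codebooks, true_idx)], axis=0)

# Initialize each factor estimate as the superposition of all its candidates
est = [sgn(cb.sum(axis=0)) for cb in codebooks]

for _ in range(50):
    for f in range(3):
        # "Unbind" the other factors' current estimates from s ...
        others = np.prod([est[g] for g in range(3) if g != f], axis=0)
        # ... then clean up through the codebook (a Hopfield-style step)
        est[f] = sgn(codebooks[f].T @ (codebooks[f] @ (s * others)))

# Read out each factor by nearest codebook entry
recovered = [int(np.argmax(cb @ e)) for cb, e in zip(codebooks, est)]
print(recovered)  # should match true_idx with high probability
```

Note how each estimate begins as a superposition of every candidate and is sharpened by the coupled cleanup steps, which is the "searching in superposition" property the abstract refers to.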