Abstract

This talk will describe my recent work with Cameron Musco and Merav Parter on studying neural networks from the perspective of the field of Distributed Algorithms. In this project, we aim both to obtain interesting, elegant theoretical results and to draw relevant biological conclusions.

We base our work on simple Stochastic Spiking Neural Network (SNN) models, in which probabilistic neural components are organized into weighted directed graphs and execute in a synchronized fashion. Our model captures the spiking behavior observed in real neural networks and reflects the widely accepted notion that spike responses, and neural computation in general, are inherently stochastic. In most of our work so far we have considered static networks, but the model also allows us to consider learning by means of weight adjustment.
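To make the synchronous, probabilistic execution concrete, here is a minimal Python sketch of one round of such a network; the sigmoid firing rule and the names W, b, and snn_round are illustrative assumptions, not the talk's exact definitions:

import numpy as np

def snn_round(W, b, spikes, rng):
    """One synchronous round of a stochastic SNN (illustrative sketch).

    W[i, j] is the weight on the edge from neuron j to neuron i
    (negative weights model inhibition), b is a vector of firing
    thresholds, and spikes is the 0/1 firing vector from the previous
    round. Each neuron fires with probability sigmoid(potential),
    reflecting the model's inherently stochastic spike responses.
    """
    potential = W @ spikes - b
    p_fire = 1.0 / (1.0 + np.exp(-potential))
    return (rng.random(len(spikes)) < p_fire).astype(int)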

Specifically, we consider the implementation of various algorithmic primitives using SNNs. We first consider a basic symmetry-breaking task that has been well studied in the computational neuroscience community: the Winner-Take-All (WTA) problem. WTA is believed to serve as a basic building block for many other tasks, such as learning, pattern recognition, and clustering. In a simple version of the problem, we are given neurons with identical firing rates and want to select a distinguished one. Our main contribution is the explicit construction of a simple and efficient WTA network containing only two inhibitory neurons; the construction uses the stochastic behavior of SNNs in an essential way. We give a complete proof of correctness and an analysis of convergence time, using proof methods from distributed algorithms. In related results, we give an optimization of the simple two-inhibitor network that achieves better convergence time at the cost of more inhibitory neurons. We also give lower bounds that show inherent limitations on the convergence time achievable with small numbers of inhibitory neurons.
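As a rough illustration of how two inhibitors can drive the network toward a single winner, here is a toy Python simulation; the specific weights, the sigmoid firing rule, and the firing conditions of the two inhibitors are assumptions chosen for the sketch, not the construction analyzed in our results:

import numpy as np

def wta_sketch(n, rounds=200, seed=0):
    """Toy simulation of a two-inhibitor WTA dynamic (illustrative only).

    n output neurons start out firing. A 'stability' inhibitor fires
    when at least one output fired, and a 'convergence' inhibitor when
    at least two did; both inhibit every output. Stochastic firing is
    what breaks the symmetry among the identical outputs.
    """
    rng = np.random.default_rng(seed)
    out = np.ones(n, dtype=int)          # all outputs active initially
    for _ in range(rounds):
        total = out.sum()
        stab = int(total >= 1)           # inhibitor 1: keeps a winner alive
        conv = int(total >= 2)           # inhibitor 2: suppresses ties
        # Potential: self-excitation minus inhibition; firing is then
        # probabilistic via a sigmoid, so ties resolve by chance.
        potential = 2.0 * out - 1.5 * stab - 1.5 * conv + 0.5
        p = 1.0 / (1.0 + np.exp(-4.0 * potential))
        out = (rng.random(n) < p).astype(int)
        if out.sum() == 1:
            return int(np.argmax(out))   # index of the selected winner
    return None                          # did not converge within budget

Once a single output remains, the stability inhibitor alone is active, and in this toy dynamic the lone winner keeps firing with high probability while all other outputs stay suppressed.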

We also consider the use of stochastic behavior in neural algorithms for Similarity Testing (ST). In this problem, the network must distinguish, with high reliability, between input vectors that are identical and input vectors that differ in a significant fraction of their positions. We construct a compact stochastic network that solves the ST problem, based on randomly sampling positions in the vectors. At the heart of our solution is the design of a compact and fast-converging neural Random Access Memory (neuro-RAM) mechanism.
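The core sampling idea, separate from its neural implementation via the neuro-RAM, can be sketched in a few lines of Python; the function name and parameters here are hypothetical:

import numpy as np

def similarity_test(x, y, samples=32, seed=0):
    """Sketch of the sampling idea behind ST, not the neural circuit.

    Compare the two input vectors at a few uniformly random positions.
    Identical vectors always pass; vectors differing in an eps fraction
    of positions slip through with probability only (1 - eps)**samples.
    """
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x), np.asarray(y)
    idx = rng.integers(0, len(x), size=samples)  # positions a neuro-RAM
                                                 # mechanism would index
    return bool(np.all(x[idx] == y[idx]))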

In this talk, I will describe our SNN model and our work on Winner-Take-All in some detail. I will also summarize our work on Similarity Testing, discuss some important general issues such as compositionality, and suggest directions for future work.
