Abstract

How would a computer scientist go about understanding the brain? We will consider how one might formulate a computational, mathematical model of the brain, and define a neural model, or NEMO, whose key ingredients are spiking neurons, random synapses and weights, local inhibition, and Hebbian plasticity (no backpropagation). Concepts are represented by interconnected assemblies of co-firing neurons that emerge organically from the dynamics of the model's equations. It turns out that complex operations can be carried out on these concept representations, such as copying, merging, completion from small subsets, and sequence memorization. This opens up a "computer science of the brain", in which one can study algorithms and complexity in an unusual, but biologically motivated, model of computation. On the "algorithms" side, I will present an algorithm for parsing, and discuss the minimal model axioms needed to recognize various formal language classes (do we need more machinery to parse context-free grammars?). Then I will present a more recent model of the language organ in the baby brain that learns the meaning of words, as well as word order (an important aspect of syntax), from whole sentences with grounded input, giving us a much richer computational hypothesis of language in the brain.
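The core ingredients named above (random synapses, local inhibition realized as a top-k "cap" on firing, and Hebbian strengthening of synapses between co-firing neurons) can be sketched in a few lines. This is a minimal illustrative sketch, not the NEMO model itself; all names and parameter values (`n`, `k`, `p`, `beta`) are placeholders chosen for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 1000      # neurons per brain area (illustrative value)
k = 50        # cap size: only the top-k most excited neurons fire (local inhibition)
p = 0.05      # probability of a random synapse between any two neurons
beta = 0.1    # Hebbian plasticity rate

# Random synapses: from a stimulus area into a target area (W_stim),
# and recurrent synapses within the target area (W_rec).
# Entry [i, j] is the weight of the synapse from presynaptic j to postsynaptic i.
W_stim = (rng.random((n, n)) < p).astype(float)
W_rec = (rng.random((n, n)) < p).astype(float)

# A fixed stimulus: k neurons firing in the source area.
stimulus = np.zeros(n)
stimulus[rng.choice(n, k, replace=False)] = 1.0

winners = np.zeros(n)  # which target-area neurons fired in the previous round
for t in range(20):
    # Total synaptic input to each target neuron.
    inputs = W_stim @ stimulus + W_rec @ winners
    # Cap: only the k most excited neurons fire this round.
    new_winners = np.zeros(n)
    new_winners[np.argsort(inputs)[-k:]] = 1.0
    # Hebbian plasticity: multiplicatively strengthen synapses from
    # neurons that fired last round to neurons that fire this round.
    W_stim[np.ix_(new_winners.astype(bool), stimulus.astype(bool))] *= 1 + beta
    W_rec[np.ix_(new_winners.astype(bool), winners.astype(bool))] *= 1 + beta
    winners = new_winners
```

With plasticity on, repeating this projection step tends to make the winner set stabilize, so the stimulus comes to be represented by a fixed assembly of roughly k interconnected neurons in the target area.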