About

The conceptual affinity between the brain and the computer dates back to the mid-20th century, when the pioneering theoreticians of both fields (von Neumann, Turing, McCulloch, Pitts, Barlow) drew on each other's discipline to better understand their own. Both fields have exploded in the decades since in terms of new knowledge, methodology, and prestige. But as they have become more technical and sophisticated, they have also grown further apart.

This program aimed to rekindle the affinity between these two fields, recognizing the enormous potential of a unified research effort. It brought together outstanding researchers in both brain science and theoretical computer science to attack some of the most important open problems in brain science that, we believe, particularly require joint scrutiny and collaboration.

The program focused on three research themes:

1. Open questions in brain science that have an important computational component. What are the roles of sparsity and overcompleteness in neural representation? How do neurons compute with spikes and dendritic nonlinearities? What can be learned from fine-grained brain connectivity (connectomics)? How do neuronal assemblies and synchrony emerge, and what role do they play in brain function?

2. Research problems in brain science where we expect computer scientists to take the lead. Recent efforts to map the anatomy, structure, and function of the brain are producing a deluge of data and considerable difficulties of interpretation, many of them computational in nature. Machine learning plays an important role here, but what new conceptual and methodological advances are needed to equip brain research? Computational theories are emerging that can help us understand the dynamics of perception-action loops and cognitive functions such as language, but much more work is needed to tie these theories to specific neuronal substrates and mechanisms.

3. Areas of computer science where we hope to see advances as a result of discoveries in neuroscience. New graph-theoretic concepts and algorithms could stem from discoveries about brain connectivity. Theories of learning could emerge from new insights into synaptic plasticity and how neural circuits self-organize and adapt in a stable manner. We also stand to gain new computing architectures, especially by learning how the brain computes with high-dimensional representations and with stochastic, low-power components.

This program was supported in part by the Kavli Foundation and the Paul G. Allen Family Foundation.


Long-Term Participants (including Organizers)

Lena Ting (Emory University and Georgia Institute of Technology)
Fred Wolf (Max Planck Institute Göttingen)
