Synaptic connections between neurons in the brain are dynamic because of continuously ongoing spine dynamics, axonal sprouting, and other processes. In fact, it was recently shown that the spontaneous, synapse-autonomous component of spine dynamics is at least as large as the component that depends on the history of pre- and postsynaptic neural activity. These data are inconsistent with common models of network plasticity, and they raise the questions of how neural circuits can maintain a stable computational function despite these continuously ongoing processes, and of what functional role these processes might play. I will present a general theoretical framework that allows us to answer these questions. Our results indicate that spontaneous synapse-autonomous processes, in combination with reward signals such as dopamine, can explain the capability of networks of neurons in the brain to configure themselves for specific computational tasks, and to compensate automatically for later changes in the network or task.
On a more general level, this novel framework allows us to analyze synaptic plasticity and network rewiring from the viewpoint of stochastic optimization and sampling methods. As an example, I will show that the framework also applies to artificial neural networks, which yields, as a side effect, new and effective brain-inspired methods for deep learning.
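To make the connection to stochastic optimization concrete, the following is a minimal toy sketch, in the spirit of rewiring-as-sampling, of how one might train a sparse linear model: each potential synapse carries a sign and a non-negative parameter, parameters follow noisy gradient descent with an L1-style prior, synapses whose parameter crosses zero are pruned, and an equal number of dormant synapses are regrown at random so the connection count stays fixed. All names, constants, and the toy regression task are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear regression problem (illustrative, not from the papers).
n_in, n_out = 20, 5
X = rng.normal(size=(100, n_in))
W_true = rng.normal(size=(n_in, n_out))
y = X @ W_true

# Each potential synapse has a fixed sign s and a parameter theta;
# the synapse is "connected" only while theta > 0.
s = rng.choice([-1.0, 1.0], size=(n_in, n_out))
theta = rng.uniform(0.0, 0.1, size=(n_in, n_out))
active = rng.random((n_in, n_out)) < 0.3      # start with ~30% of synapses connected
theta[~active] = 0.0
n_active = int(active.sum())                  # connection budget to preserve

eta, alpha, temp = 1e-3, 1e-4, 1e-5           # learning rate, L1 prior strength, noise temperature

W = np.where(active, s * theta, 0.0)
loss_initial = float(np.mean((X @ W - y) ** 2))

for step in range(500):
    W = np.where(active, s * theta, 0.0)
    grad = X.T @ (X @ W - y) / len(X)         # dL/dW for the squared-error loss
    # Noisy gradient step on theta for active synapses (Langevin-style sampling):
    noise = np.sqrt(2.0 * eta * temp) * rng.normal(size=theta.shape)
    theta = np.where(active, theta - eta * (s * grad + alpha) + noise, 0.0)
    # Synapses whose parameter crossed zero are pruned ...
    active &= theta > 0
    theta[~active] = 0.0
    # ... and the same number of dormant synapses are regrown at random,
    # keeping the total number of connections constant.
    n_regrow = n_active - int(active.sum())
    if n_regrow > 0:
        dormant = np.flatnonzero(~active)
        idx = rng.choice(dormant, size=n_regrow, replace=False)
        active.flat[idx] = True
        theta.flat[idx] = 1e-4
        s.flat[idx] = rng.choice([-1.0, 1.0], size=n_regrow)

W = np.where(active, s * theta, 0.0)
loss_final = float(np.mean((X @ W - y) ** 2))
```

The key design point this sketch illustrates is that learning never densifies the network: optimization and rewiring are a single stochastic process over a fixed connection budget, rather than training a dense network and pruning it afterwards.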
Kappel, D., Habenschuss, S., Legenstein, R., & Maass, W. (2015). Network plasticity as Bayesian inference. PLoS Computational Biology, 11(11), e1004485.
Kappel, D., Legenstein, R., Habenschuss, S., Hsieh, M., & Maass, W. (2018). A dynamic connectome supports the emergence of stable computational function of neural circuits through reward-based learning. arXiv preprint arXiv:1704.04238.
Bellec, G., Kappel, D., Maass, W., & Legenstein, R. (2017). Deep Rewiring: Training very sparse deep networks. arXiv preprint arXiv:1711.05136.