Abstract
The brain is an unparalleled learning machine, yet the principles that govern learning in the brain remain unclear. In this talk I will suggest that depth, the serial propagation of signals, may be a key principle sculpting learning dynamics in the brain and mind. To understand several consequences of depth, I will present mathematical analyses of the nonlinear dynamics of learning in a variety of simple, solvable deep network models. Building on this theoretical work, I will turn to rodent systems neuroscience, showing that deep network dynamics can account for individually variable yet systematic transitions in strategy as mice learn a visual detection task over several weeks. Together, these results provide analytic insight into how the statistics of an environment can interact with nonlinear deep learning dynamics to structure evolving neural representations and behavior over learning.
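
As a concrete illustration of the kind of solvable model referenced above, the sketch below simulates gradient descent in a two-layer linear network. This choice is an assumption on my part: deep linear networks are a standard setting for exactly solvable learning dynamics, but the abstract does not name the talk's specific models, and all names here (`Sigma_yx`, `W1`, `W2`, `lr`) are illustrative. The sketch shows the phenomenon the abstract describes: although the model is simple, learning is nonlinear in the weights, and the statistics of the environment (here, the singular values of an input-output correlation matrix) dictate when each piece of structure is acquired, producing staggered, stage-like transitions.

```python
# A minimal sketch, not taken from the talk: gradient descent in a
# two-layer *linear* network, one standard example of a simple solvable
# deep model (an assumption; the abstract does not name the models).
# The map W2 @ W1 is linear, but the learning dynamics are nonlinear in
# the weights: each correlation mode of the environment is acquired
# along a roughly sigmoidal trajectory, stronger modes first.
import numpy as np

rng = np.random.default_rng(0)
dim = 8  # input, hidden, and output dimensions (kept equal for brevity)

# Environment statistics: an input-output correlation matrix whose
# singular values s set the strength of each mode.
s = np.array([5.0, 3.0, 1.5] + [0.2] * (dim - 3))
U = np.linalg.qr(rng.standard_normal((dim, dim)))[0]
V = np.linalg.qr(rng.standard_normal((dim, dim)))[0]
Sigma_yx = U @ np.diag(s) @ V.T  # target input-output correlations
Sigma_x = np.eye(dim)            # whitened inputs

# Small random initialization: depth makes early learning slow, followed
# by a rapid, stage-like rise for each mode.
W1 = 1e-3 * rng.standard_normal((dim, dim))
W2 = 1e-3 * rng.standard_normal((dim, dim))

lr, steps = 0.005, 4000
strengths = []  # learned strength of each environment mode over time
for t in range(steps):
    err = Sigma_yx - W2 @ W1 @ Sigma_x  # error in the composite map
    g1 = W2.T @ err                     # gradient for the first layer
    g2 = err @ W1.T                     # gradient for the second layer
    W1 += lr * g1
    W2 += lr * g2
    strengths.append(np.diag(U.T @ (W2 @ W1) @ V).copy())

strengths = np.array(strengths)
# Rows: selected time steps; columns: the three strongest modes.
# Each column rises along a sigmoid, and larger singular values are
# learned first: graded statistics yield staggered transitions.
print(np.round(strengths[[0, 200, 400, 800, 1600, 3200], :3], 2))
```

Running the sketch prints the learned strength of the three strongest modes at selected time steps; each rises along a sigmoid whose onset comes earlier for stronger modes, a toy version of the individually variable yet systematic strategy transitions described above.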