Abstract

When learning to make appropriate choices in different situations, humans can use multiple strategies in parallel, including working memory and reinforcement learning. Working memory allows very fast learning, but is cognitively effortful and limited in how much information it can retain, and for how long. Reinforcement learning has broader scope, but is slower and more incremental. Here, we investigate whether these two functions are computationally independent and simply compete for the control of choice, or whether they interact at a deeper level. In multiple independent games, participants learned to select actions for varying numbers of new stimuli. When learning a small number of associations, performance was near optimal, indicating working memory use; however, as the number of items to learn increased, performance gradually decayed to a more incremental learning profile, as expected from a slower reinforcement learning mechanism. We will show evidence from fMRI, EEG, and behavioral studies that the working memory process influences reinforcement learning computations, and specifically the update of estimated values with reward prediction errors. Indeed, this value update was surprisingly weakened in the easier conditions where performance was best. We will use computational modeling to show that this finding is compatible with a competitive or cooperative interaction between working memory and reinforcement learning, but not with the assumption that the two are independent. We will then show preliminary evidence supporting the cooperative hypothesis, whereby working memory contributes expectations to the computation of the reward prediction error.
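To make the contrast concrete, the following is a minimal sketch, not the speakers' actual model: it assumes a standard delta-rule reinforcement learning update, and the cooperative variant simply mixes a working memory expectation into the prediction error, as the abstract describes. All names and parameters (q_rl, q_wm, alpha, the mixture weight w) are illustrative assumptions.

```python
# Sketch of two hypotheses for how working memory (WM) might shape the
# reinforcement learning (RL) value update. Illustrative only; names and
# parameters are assumptions, not taken from the talk.

def independent_update(q_rl, action, reward, alpha):
    """Independent hypothesis: RL computes its prediction error from its
    own value estimate, untouched by WM."""
    delta = reward - q_rl[action]              # reward prediction error
    q_rl[action] += alpha * delta              # delta-rule value update
    return delta

def cooperative_update(q_rl, q_wm, action, reward, alpha, w):
    """Cooperative hypothesis: WM contributes its expectation to the
    prediction error, so an accurate WM prediction shrinks the error and
    weakens the RL update -- most strongly in easy, low-load conditions
    where WM performs best."""
    expectation = w * q_wm[action] + (1.0 - w) * q_rl[action]
    delta = reward - expectation               # WM-informed prediction error
    q_rl[action] += alpha * delta
    return delta

# Toy usage: one rewarded trial for a stimulus with three possible actions,
# where WM already "knows" the correct action but RL values are still flat.
q_rl = [0.5, 0.5, 0.5]
q_wm = [0.0, 1.0, 0.0]
print(independent_update(list(q_rl), 1, 1.0, 0.1))          # error 0.5
print(cooperative_update(list(q_rl), q_wm, 1, 1.0, 0.1, 0.8))  # error 0.1
```

Under these assumptions, the cooperative account reproduces the abstract's key observation: the same reward produces a smaller prediction error, and hence a weaker value update, precisely when working memory performance is best.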

Session Chair: Christos Papadimitriou
