Abstract

We use simulations of a simple learning model to predict how cooperation varies with treatment in the experimental play of the indefinitely repeated prisoner's dilemma. We suppose that learning and the game parameters influence play only in the initial round of each supergame, and that after these rounds, play depends only on the outcome of the previous round. Using data from 17 papers, we find that our model predicts out-of-sample cooperation at least as well as more complicated models with more parameters and harder-to-interpret machine learning algorithms. Our results let us predict how cooperation rates change with longer experimental sessions, and help explain past findings on the role of strategic uncertainty.
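
To make the model's two-part structure concrete, here is a minimal Python sketch of that structure: a single probability governs initial-round cooperation and adapts across supergames, while play within each supergame follows fixed memory-one responses to the previous round's outcome. The payoff values, response probabilities, and reinforcement rule below are illustrative placeholders of our own choosing, not the paper's estimated model.

```python
import random

# Stage-game payoffs, (my action, their action) -> my payoff.
# Standard illustrative PD values, not parameters from any cited experiment.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

# Memory-one continuation play: probability of cooperating given the
# previous round's outcome (own action, opponent's action). Illustrative
# numbers; the paper estimates such responses from experimental data.
MEMORY_ONE = {("C", "C"): 0.9, ("C", "D"): 0.2, ("D", "C"): 0.3, ("D", "D"): 0.1}


def play_supergame(p_initial, delta, rng):
    """One indefinitely repeated PD between two memory-one agents.

    p_initial is the probability of cooperating in round 1 -- the only
    place learning and game parameters enter, per the model's assumption.
    delta is the continuation probability. Returns (total payoffs,
    first-round action pair, number of rounds played).
    """
    a = "C" if rng.random() < p_initial else "D"
    b = "C" if rng.random() < p_initial else "D"
    totals = [PAYOFF[(a, b)], PAYOFF[(b, a)]]
    first, rounds = (a, b), 1
    while rng.random() < delta:  # supergame continues with probability delta
        a, b = (  # both responses use last round's (a, b)
            "C" if rng.random() < MEMORY_ONE[(a, b)] else "D",
            "C" if rng.random() < MEMORY_ONE[(b, a)] else "D",
        )
        totals[0] += PAYOFF[(a, b)]
        totals[1] += PAYOFF[(b, a)]
        rounds += 1
    return totals, first, rounds


def simulate_session(n_supergames=50, delta=0.9, p0=0.5, lr=0.05, seed=0):
    """Track initial-round cooperation across the supergames of a session.

    The update rule is a stylized reinforcement stand-in: shift the
    initial cooperation probability toward whichever opening action paid
    better than the mutual-defection benchmark. It illustrates the
    structure (learning acts only on round-1 play), not the paper's
    estimated learning model.
    """
    rng = random.Random(seed)
    p, first_round_coop = p0, []
    for _ in range(n_supergames):
        totals, (a0, _), rounds = play_supergame(p, delta, rng)
        first_round_coop.append(1.0 if a0 == "C" else 0.0)
        paid_off = totals[0] / rounds > PAYOFF[("D", "D")]
        target = 1.0 if (a0 == "C") == paid_off else 0.0
        p += lr * (target - p)  # learning moves only round-1 behavior
    return first_round_coop


if __name__ == "__main__":
    rates = simulate_session()
    print("early supergames:", sum(rates[:10]) / 10)
    print("late supergames: ", sum(rates[-10:]) / 10)
```

Because the session length enters only through how many supergames the initial-round probability is updated over, aggregating simulated first-round cooperation in this way is what allows a model of this form to extrapolate to sessions longer than those run in the lab.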
