Abstract

Peer prediction is the problem of information elicitation without verification. Two obstacles have plagued its application in practice: the need for simple, "signal-only" schemes, and the need to knock out undesirable, uninformative equilibria.
 
Following Dasgupta and Ghosh (2013), we allow agents to report signals on overlapping sets of independent tasks. We characterize conditions under which the prior-free DG mechanism generalizes from binary- to multi-signal domains while retaining strong truthfulness, so that truthful reporting yields the maximum payoff across all equilibria (tied only with reporting permutations). We also obtain a greatly simplified proof of their result.
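For concreteness, here is a minimal sketch of the binary DG scoring rule, assuming its standard bonus-minus-penalty form: an agent earns a bonus for agreeing with a peer on a shared task, offset by how often their reports happen to agree on tasks they did not share. The function and variable names below are illustrative, not taken from the paper.

```python
import random

def dg_binary_score(my_reports, peer_reports, shared_task, rng=random):
    """Binary Dasgupta-Ghosh score of one agent against one peer.

    my_reports, peer_reports: dicts mapping task id -> report in {0, 1}.
    shared_task: a task id answered by both agents.
    """
    # Bonus: 1 if the two agents agree on the shared task.
    bonus = int(my_reports[shared_task] == peer_reports[shared_task])

    # Penalty: agreement on a pair of tasks the agents did NOT share,
    # which cancels any gain from constant or random reporting.
    my_other = [t for t in my_reports if t != shared_task and t not in peer_reports]
    peer_other = [t for t in peer_reports if t != shared_task and t not in my_reports]
    penalty = int(my_reports[rng.choice(my_other)] == peer_reports[rng.choice(peer_other)])

    return bonus - penalty
```

An uninformed strategy (say, always reporting 1) matches the peer equally often on shared and non-shared tasks, so its expected score is zero, whereas truthful reporting of positively correlated binary signals earns a positive expected score.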
 
In extensions, we develop a mechanism that uses knowledge of the signal distribution to obtain a slightly weaker incentive property in all domains: no strategy provides more payoff in equilibrium than truthful reporting, and truthful reporting is strictly better than any uninformed strategy. In an analysis of peer-evaluation data from a large MOOC platform, we investigate how well student reports fit our models and evaluate how the proposed scoring mechanisms would perform in practice. We find some surprises in the distributions, but conclude that our modified mechanisms would do well.
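The abstract does not spell out how the signal distribution is used; the sketch below assumes a correlated-agreement style construction, in which two reports count as agreeing whenever the corresponding signals are positively correlated under the known joint distribution, with the same shared-minus-non-shared structure as above. Names and the exact form are assumptions, not taken from the talk.

```python
import numpy as np

def correlated_agreement_scorer(joint):
    """Build a pairwise agreement matrix from a known joint signal distribution.

    joint: (n x n) array with joint[x][y] = Pr(agent signal x, peer signal y).
    A pair (x, y) counts as agreement when its joint probability exceeds the
    product of the marginals, i.e. the signals are positively correlated.
    """
    joint = np.asarray(joint, dtype=float)
    marg_x = joint.sum(axis=1)
    marg_y = joint.sum(axis=0)
    delta = joint - np.outer(marg_x, marg_y)
    return (delta > 0).astype(int)

def ca_score(S, my_reports, peer_reports, shared_task, my_other_task, peer_other_task):
    """Agreement (per S) on the shared task minus agreement on non-shared tasks."""
    bonus = S[my_reports[shared_task], peer_reports[shared_task]]
    penalty = S[my_reports[my_other_task], peer_reports[peer_other_task]]
    return bonus - penalty
```

For positively correlated binary signals the matrix S is the identity, and this score collapses back to the DG rule sketched earlier.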
 
Time permitting, I will describe directions for future work, including design under replicator dynamics and handling heterogeneity of taste amongst participants.
 
Joint work with Rafael Frongillo (U Colorado, Boulder) and Victor Shnayder (edX).
