Incentivizing Peer Evaluation
Eliciting evaluations from individuals is important for many applications, such as peer grading and recommender systems. However, it is challenging to motivate participants to perform evaluations carefully and to report them honestly, especially when doing so requires costly effort. One promising solution is peer prediction mechanisms, which reward each participant by comparing their report with those of their peers. Peer prediction mechanisms are designed to induce truth-telling in equilibrium, and more than a decade of literature has been devoted to improving them. However, they also give rise to uninformative equilibria in which participants do not reveal any useful information.
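As an illustration of the two kinds of equilibria mentioned above, consider a minimal sketch of one simple peer prediction scheme, output agreement, in which a participant earns a bonus whenever their report matches that of a randomly chosen peer. This is an assumed example for intuition only, not necessarily one of the mechanisms studied in the talk; the function and variable names are hypothetical.

```python
import random

def output_agreement_reward(reports, bonus=1.0):
    """Output agreement (illustrative sketch): each participant is paid
    `bonus` if their report matches that of a randomly chosen peer,
    and 0 otherwise.

    reports: dict mapping participant id -> reported evaluation
    """
    rewards = {}
    for i, r in reports.items():
        peers = [j for j in reports if j != i]
        peer = random.choice(peers)  # compare against one random peer
        rewards[i] = bonus if r == reports[peer] else 0.0
    return rewards

# Truthful agreement is rewarded...
print(output_agreement_reward({"alice": "pass", "bob": "pass", "carol": "fail"}))

# ...but so is an uninformative equilibrium: if everyone reports the same
# value regardless of the submission's quality, everyone collects the bonus.
print(output_agreement_reward({"alice": "pass", "bob": "pass", "carol": "pass"}))
```

The second call shows the uninformative equilibrium: unanimous reporting pays every participant the full bonus while conveying no information about the item being evaluated.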
In this talk, I will begin by motivating the peer grading problem and introducing peer prediction mechanisms as a promising solution. Then I will describe our experimental results on several peer prediction mechanisms, showing that participants do not report truthfully when rewarded by these mechanisms, even though the mechanisms are designed to induce truth-telling in theory. Finally, I will describe our ongoing research on circumventing this problem by checking participants' reports against ground truth.