Fall 2016

Bandits and Agents: How to Incentivize Exploration?

Thursday, September 22nd, 2016, 2:00 pm – 2:30 pm

Calvin Lab Auditorium 

Individual decision-makers consume information revealed by previous decision-makers and produce information that may help in future decisions. This phenomenon is common in a wide range of scenarios in the Internet economy, as well as elsewhere, such as medical decisions. Each decision-maker would individually prefer to exploit: select the action with the highest expected reward given her current information. At the same time, each decision-maker would prefer previous decision-makers to have explored, producing information about the rewards of various actions. A social planner, by means of carefully designed information disclosure, can incentivize the agents to balance exploration and exploitation so as to maximize social welfare.
We formulate this problem as a multi-armed bandit problem (and various generalizations thereof) under incentive-compatibility constraints induced by the agents' Bayesian priors. We design a Bayesian incentive-compatible bandit algorithm for the social planner with asymptotically optimal regret. Further, we provide a black-box reduction from an arbitrary multi-armed bandit algorithm to an incentive-compatible one, with only a constant multiplicative increase in regret. The reduction works for very general bandit settings that incorporate contexts and arbitrary partial feedback.
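As background for the regret benchmark mentioned above, the sketch below shows a standard (non-incentive-compatible) multi-armed bandit algorithm, UCB1, which balances exploration and exploitation via optimistic confidence bounds. This is illustrative only: the talk's Bayesian incentive-compatible algorithm and black-box reduction are not shown here, and the Bernoulli arm setup is an assumption for the demo.

```python
import math
import random

def ucb1(arm_means, horizon, seed=0):
    """Run UCB1 on Bernoulli arms with the given (unknown-to-the-algorithm)
    success probabilities. Returns the pull count of each arm."""
    rng = random.Random(seed)  # fixed seed so the demo is reproducible
    k = len(arm_means)
    pulls = [0] * k       # number of times each arm was chosen
    totals = [0.0] * k    # cumulative reward of each arm

    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1   # initialization: pull every arm once
        else:
            # optimism: empirical mean plus a confidence-width bonus;
            # rarely-pulled arms get a large bonus, forcing exploration
            arm = max(
                range(k),
                key=lambda a: totals[a] / pulls[a]
                + math.sqrt(2 * math.log(t) / pulls[a]),
            )
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        pulls[arm] += 1
        totals[arm] += reward
    return pulls

# After enough rounds, most pulls concentrate on the best arm,
# which is what "asymptotically optimal regret" quantifies.
counts = ucb1([0.2, 0.5, 0.8], horizon=5000)
```

The incentive problem arises because a myopic agent would never pay the exploration bonus herself; the planner's information-disclosure scheme makes following such an algorithm's recommendations Bayesian incentive-compatible.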
Joint work with Yishay Mansour (Tel Aviv University and MSR Israel) and Vasilis Syrgkanis (MSR-NE).