Abstract
Oftentimes, decisions involve multiple, possibly conflicting rewards or costs. For example, solving a problem faster may incur extra cost or sacrifice safety. In such cases, one possibility is to aim for decisions that maximize the value obtained from one reward function while keeping the values obtained from the other reward functions above some prespecified targets. Up to logarithmic factors, we resolve the optimal worst-case sample complexity of finding solutions to such problems in the discounted MDP setting when a generative model of the MDP is available. While this is clearly an oversimplified problem, our analysis reveals an interesting gap between the sample complexity of this problem and the sample complexity of solving MDPs when the solver must return a solution that, with a prescribed probability, does not violate the constraints. In the talk, I will explain the background of the problem, the origin of the gap, and the algorithm that we know achieves the near-optimal sample complexity, closing with some open questions. This is joint work with Sharan Vaswani and Lin F. Yang.
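The constrained objective described above can be sketched in standard discounted-MDP notation (the symbols below are my choice of notation, not taken from the abstract): maximize the discounted value of one designated reward while the values of the remaining rewards meet prespecified targets.

```latex
% A constrained MDP with rewards r_0, r_1, ..., r_m and policy \pi:
% maximize the value of r_0 subject to target values \xi_i for the others.
\[
\max_{\pi} \; V^{\pi}_{r_0}
\quad \text{subject to} \quad
V^{\pi}_{r_i} \ge \xi_i, \qquad i = 1, \dots, m,
\]
% where V^{\pi}_{r} denotes the expected discounted return of policy \pi
% under reward function r, i.e.
\[
V^{\pi}_{r} = \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t)\right],
\qquad \gamma \in [0, 1).
\]
```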