Calvin Lab Room 116
This week's speakers will talk about regret minimization and its connection with non-convex games, and about the MDP-based Platform Design Problem.
Title: Efficient Regret Minimization in Non-Convex Games [in person]
Abstract: We consider regret minimization in repeated games with non-convex loss functions. Minimizing the standard notion of regret is computationally intractable. Thus, we define a natural notion of regret which permits efficient optimization and generalizes offline guarantees for convergence to an approximate local optimum. We give gradient-based methods that achieve optimal regret, which in turn guarantee convergence to equilibrium in this framework. Paper connected with this talk: Efficient Regret Minimization in Non-Convex Games
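The gradient-based methods mentioned in the abstract aim at a local notion of regret: finding points where the gradient of a windowed average of recent losses is small. The following is a minimal sketch of one such time-smoothed online gradient descent scheme; the function name, step sizes, and window length are illustrative assumptions, not the paper's exact algorithm or parameters.

```python
import numpy as np

def time_smoothed_ogd(grad_oracles, x0, window=5, eta=0.05, inner_steps=50):
    """Sketch: at each round, commit to the current iterate, observe the
    new loss's gradient oracle, then take gradient steps on the average
    of the last `window` losses (driving their averaged gradient, and
    hence the "local regret" contribution, toward zero)."""
    x = np.asarray(x0, dtype=float)
    played, seen = [], []
    for grad_f in grad_oracles:
        played.append(x.copy())       # play x_t before seeing f_t
        seen.append(grad_f)           # then observe f_t's gradient oracle
        recent = seen[-window:]       # window of most recent losses
        for _ in range(inner_steps):  # descend the windowed average loss
            g = sum(gf(x) for gf in recent) / len(recent)
            x = x - eta * g
    return played
```

For smooth non-convex losses, iterates produced this way approach approximate local optima of the windowed averages, which is the offline guarantee the abstract's regret notion generalizes.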
Bio: Binghui Peng is a third-year PhD student (Fall 2019 - present) in the computer science department of Columbia University, advised by Prof. Christos Papadimitriou and Prof. Xi Chen. He is broadly interested in theoretical computer science, especially its intersection with machine learning and game theory. In summer 2020, he worked at Google Research NYC, hosted by Sara Ahmadian. Previously, he received his bachelor's degree from the Yao Class at Tsinghua University, where he worked with Prof. Pingzhong Tang on game theory.
Title: The Platform Design Problem [zoom]
Abstract: Online firms deploy suites of software platforms, where each platform is designed to interact with users during a certain activity, such as browsing, chatting, socializing, emailing, driving, etc. The economic and incentive structure of this exchange, as well as its algorithmic nature, have not been explored to our knowledge. We model this interaction as a Stackelberg game between a Designer and one or more Agents. We model an Agent as a Markov chain whose states are activities; we assume that the Agent's utility is a linear function of the steady-state distribution of this chain. The Designer may design a platform for each of these activities/states; if a platform is adopted by the Agent, the transition probabilities of the Markov chain are affected, and so is the objective of the Agent. The Designer's utility is a linear function of the steady-state probabilities of the accessible states minus the development cost of the platforms. The underlying optimization problem of the Agent -- how to choose the states for which to adopt the platform -- is an MDP. If this MDP has a simple yet plausible structure (the transition probabilities from one state to another depend only on the target state and the recurrent probability of the current state), the Agent's problem can be solved by a greedy algorithm. The Designer's optimization problem (designing a custom suite for the Agent so as to optimize, through the Agent's optimum reaction, the Designer's revenue) is NP-hard to approximate within any finite ratio; however, this special case, while still NP-hard, admits an FPTAS. These results generalize from a single Agent to a distribution of Agents with finite support, as well as to the setting where the Designer must find the best response to the existing strategies of other Designers. We discuss other implications of our results and directions of future research. Papers connected with this talk:
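The Agent model in the abstract rests on two computations: the steady-state distribution of a Markov chain and a utility linear in that distribution. A minimal sketch of both follows; the function names are illustrative assumptions, and this is the underlying model only, not the paper's greedy algorithm for choosing which platforms to adopt.

```python
import numpy as np

def steady_state(P):
    """Stationary distribution pi of a row-stochastic transition
    matrix P, i.e. the solution of pi P = pi with sum(pi) = 1,
    found via a least-squares solve of the linear system."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])  # pi P = pi and sum = 1
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

def agent_utility(P, rewards):
    """Agent's utility: linear in the steady-state distribution,
    with one reward coefficient per activity/state."""
    return steady_state(P) @ np.asarray(rewards, dtype=float)
```

In this framing, adopting a platform replaces some entries of P with new transition probabilities; the Agent compares `agent_utility` under the modified and unmodified chains to decide whether adoption pays off.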
Bio: Kiran Vodrahalli is a fifth-year Ph.D. student at Columbia University advised by Daniel Hsu and Alex Andoni. He previously received a B.A. in Mathematics and an M.S.E. in Computer Science from Princeton University, where he was advised by Sanjeev Arora. For more, his website is https://kiranvodrahalli.