Playlist: 20 videos

Learning in the Presence of Strategic Behavior

1:10:16
Christos Papadimitriou (Columbia University)
https://simons.berkeley.edu/talks/tbd-370
1:00:30
Andre Wibisono (Yale University)
https://simons.berkeley.edu/talks/tbd-372

We study the alternating mirror descent algorithm for two-player constrained bilinear zero-sum games. We show that alternating mirror descent can be interpreted as a symplectic discretization of a Hamiltonian flow in the dual space. We prove a regret bound for alternating mirror descent under decreasing step sizes, and we study when a regret bound can also be shown under a constant step size, as in the unconstrained case (alternating gradient descent).
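As a concrete illustration of the setup, here is a minimal sketch (not the speaker's exact algorithm or analysis) of alternating entropic mirror descent, i.e., multiplicative-weights updates on the simplex, for a bilinear zero-sum game; the step size, horizon, and the matching-pennies payoff matrix are illustrative choices:

```python
import numpy as np

def alternating_mirror_descent(A, x0, y0, steps=4000, eta=0.05):
    """Entropic mirror descent (multiplicative weights) on the simplex
    for the bilinear game min_x max_y x^T A y. The updates *alternate*:
    the maximizing player reacts to x's freshly updated iterate."""
    x, y = x0.copy(), y0.copy()
    x_sum, y_sum = np.zeros_like(x), np.zeros_like(y)
    for _ in range(steps):
        x = x * np.exp(-eta * (A @ y))       # x's mirror (MWU) step
        x /= x.sum()
        y = y * np.exp(eta * (A.T @ x))      # y reacts to the new x
        y /= y.sum()
        x_sum += x
        y_sum += y
    return x_sum / steps, y_sum / steps      # time-averaged strategies

# Matching pennies: the unique equilibrium is uniform play, value 0.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
x_bar, y_bar = alternating_mirror_descent(
    A, np.array([0.7, 0.3]), np.array([0.6, 0.4]))
```

The last iterates cycle around the equilibrium (the Hamiltonian-preserving behavior the abstract alludes to), while the time averages approach uniform play.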
1:09:16
Annie Liang (Northwestern University)
https://simons.berkeley.edu/talks/analysis-alternating-mirror-descent-constrained-min-max-games

Economists often estimate models using data from a particular setting, e.g., estimating risk preferences in a specific subject pool. Whether a model's predictions extrapolate well across settings depends on whether the model has learned generalizable structure. We provide a tractable formulation for this "out-of-domain" prediction problem, and define the transfer error of a model to be its performance on data from a new domain. We derive finite-sample forecast intervals that are guaranteed to cover realized transfer errors with a user-selected probability when domains are drawn i.i.d. from some population, and apply our approach to compare the transferability of economic models and black-box algorithms for predicting certainty equivalents. While the black-box algorithms outperform the economic models in traditional "within-domain" tests (i.e., when estimated and tested on data from the same domain), in this application models motivated by economic theory transfer more reliably than black-box machine learning methods do.
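One way to get a feel for this kind of guarantee is a conformal-style one-sided bound (a sketch, not the authors' exact construction): if per-domain transfer errors are exchangeable draws from a population of domains, an order statistic of the observed errors upper-bounds a new domain's error with the chosen probability. The error values below are made up for illustration:

```python
import numpy as np

def transfer_error_bound(domain_errors, alpha=0.1):
    """Distribution-free upper forecast bound: with K exchangeable
    per-domain errors, the ceil((K+1)(1-alpha))-th order statistic
    covers a new domain's error with probability >= 1 - alpha."""
    e = np.sort(np.asarray(domain_errors, dtype=float))
    k = len(e)
    rank = int(np.ceil((k + 1) * (1 - alpha)))
    if rank > k:
        return np.inf   # too few domains for this coverage level
    return e[rank - 1]

errors = [0.12, 0.08, 0.30, 0.15, 0.22, 0.11, 0.19, 0.09, 0.25, 0.14,
          0.17, 0.21, 0.10, 0.13, 0.28, 0.16, 0.18, 0.07, 0.24, 0.20]
bound = transfer_error_bound(errors, alpha=0.1)   # 19th of 20 sorted errors
```

Note that with too few domains the bound is vacuous (infinite), mirroring the finite-sample flavor of the guarantee.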
1:06:20
Avrim Blum (Toyota Technological Institute at Chicago)
https://simons.berkeley.edu/talks/tbd-373

In this talk I will discuss a few lines of work involving learning, incentivizing improvement, and fairness in the presence of strategic behavior. In the first, we consider online linear classification: agents (who all wish to be classified as positive) arrive one at a time, observe the current decision rule, and then modify their observable features to obtain a positive classification if they can do so at a cost less than their value for being positively classified. A particular challenge is that updates made by the learning algorithm change how agents behave, and do so in a non-convex and discontinuous manner. In the second line, we consider the simpler offline (batch) setting, but with the twist that agents have both gaming and improvement actions, and the decision-maker would like to incentivize agents to become positive (e.g., to create more qualified loan applicants). In the third line, we consider a pure improvement setting and examine several fairness objectives. This is joint work with Saba Ahmadi, Hedyeh Beyhaghi, and Keziah Naggita.
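The agents' best response in the first setting can be sketched in a few lines (a toy version under assumed Euclidean movement costs, not the speakers' exact model): an agent moves to the nearest positively-classified point whenever the cost of doing so is below its value for a positive label.

```python
import numpy as np

def agent_response(x, w, b, value=1.0):
    """Toy strategic agent facing the linear rule sign(w.x + b):
    if negatively classified, it moves to the closest point on the
    positive side provided the movement cost (Euclidean distance,
    an assumption here) is below its value for a positive label."""
    score = w @ x + b
    if score >= 0:
        return x.copy()                    # already classified positive
    dist = -score / np.linalg.norm(w)      # distance to the boundary
    if dist < value:
        # project onto the boundary (plus a tiny margin past it)
        return x + (dist + 1e-9) * w / np.linalg.norm(w)
    return x.copy()                        # gaming too costly: stay put
```

The discontinuity the abstract mentions is visible here: an infinitesimal change to (w, b) can flip whether an agent moves at all, which is what makes the learner's online problem hard.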
1:14:35
Jon Kleinberg (Cornell University)
https://simons.berkeley.edu/talks/tbd-374

Online platforms have a wealth of data, run countless experiments and use industrial-scale algorithms to optimize user experience. Despite this, many users seem to regret the time they spend on these platforms. One possible explanation is that incentives are misaligned: platforms are not optimizing for user happiness. We suggest the problem runs deeper, transcending the specific incentives of any particular platform, and instead stems from a mistaken foundational assumption. To understand what users want, platforms look at what users do. This is a kind of revealed-preference assumption that is ubiquitous in user models. Yet research has demonstrated, and personal experience affirms, that we often make choices in the moment that are inconsistent with what we actually want: we can choose mindlessly or myopically, behaviors that feel entirely familiar on online platforms.

In this work, we develop a model of media consumption where users have inconsistent preferences. We consider what happens when a platform that simply wants to maximize user utility is only able to observe behavioral data in the form of user engagement. Our model produces phenomena related to overconsumption that are familiar from everyday experience, but difficult to capture in traditional user interaction models. A key ingredient is a formulation for how platforms determine what to show users: they optimize over a large set of potential content (the content manifold) parametrized by underlying features of the content. We show how the relationship between engagement and utility depends on the structure of the content manifold, characterizing when engagement optimization leads to good utility outcomes. By linking these effects to abstractions of platform design choices, our model thus creates a theoretical framework and vocabulary in which to explore interactions between design, behavioral science, and social media.

This is joint work with Sendhil Mullainathan and Manish Raghavan.
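A stripped-down numerical caricature of the engagement-versus-utility tension (purely illustrative, not the authors' model): content is parametrized by a single feature on a one-dimensional "content manifold", engagement and true utility peak at different points, and an engagement-optimizing platform serves content away from what users actually want.

```python
import numpy as np

# Toy content manifold: a scalar feature x in [0, 1]. The peak
# locations 0.3 and 0.7 are arbitrary illustrative choices.
xs = np.linspace(0.0, 1.0, 1001)
utility = -(xs - 0.3) ** 2        # users are best served near x = 0.3
engagement = -(xs - 0.7) ** 2     # but they engage most near x = 0.7

x_eng = xs[np.argmax(engagement)]     # what the platform shows
x_opt = xs[np.argmax(utility)]        # what users actually want
utility_gap = utility[np.argmax(utility)] - utility[np.argmax(engagement)]
```

When the two peaks coincide, the gap vanishes; the abstract's point is to characterize, in terms of the manifold's structure, when engagement optimization keeps this gap small.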
1:08:10
Rafael Frongillo (University of Colorado Boulder)
https://simons.berkeley.edu/talks/tbd-375

Forecasting competitions, wherein forecasters submit predictions about future events or unseen data points, are an increasingly common way to gather information and identify experts. One of the most prominent competition platforms is Kaggle, which has run machine learning competitions with prizes of up to 3 million USD. The most common approach to running such a competition is also the simplest: score each prediction given the outcome of each event (or data point), and pick the forecaster with the highest score as the winner. Perhaps surprisingly, this simple mechanism has poor incentives, especially when the number of events (data points) is small relative to the number of forecasters. Witkowski et al. (2018) identified this problem and proposed a clever solution, the Event Lotteries Forecasting (ELF) mechanism. Unfortunately, to choose the best forecaster as the winner, ELF still requires a large number of events. This talk will give an overview of the problem and introduce a new mechanism that achieves robust incentives with far fewer events. Our approach borrows ideas from online machine learning; we will see how the same mechanism solves an open question for online learning from strategic experts.
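The incentive failure of the simple winner-take-all mechanism is easy to reproduce in simulation (a sketch with made-up numbers, not the talk's analysis): against nine truthful rivals, a forecaster who "extremizes" her report wins far more often than her fair 1-in-10 share, because winner-take-all rewards score variance rather than accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)

def win_frequency(report, rival_report=0.7, p=0.7, n_events=5,
                  n_rivals=9, trials=20_000):
    """How often the deviating forecaster has the strictly highest
    total quadratic (Brier-style) score, score(r, o) = 1 - (r - o)^2.
    All rivals report rival_report, so they tie with each other."""
    wins = 0
    for _ in range(trials):
        outcomes = (rng.random(n_events) < p).astype(float)
        mine = np.sum(1 - (report - outcomes) ** 2)
        rival = np.sum(1 - (rival_report - outcomes) ** 2)
        wins += mine > rival
    return wins / trials

truthful = win_frequency(0.7)    # ties with rivals: never strictly best
extreme = win_frequency(0.99)    # wins whenever all five events occur
```

With five events and true probability 0.7, the extremist wins roughly 0.7^5 ≈ 17% of competitions, well above the 10% a truthful forecaster could hope for in a ten-player field.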
1:11:20
Moritz Hardt (Max Planck Institute for Intelligent Systems)
https://simons.berkeley.edu/talks/how-not-run-forecasting-competition-incentives-and-efficiency

Algorithmic predictions steer markets, drive consumption, shape communities, and alter life trajectories. The theory and practice of machine learning, however, have long neglected the often invisible causal forces of prediction. A recent conceptual framework, called performative prediction, draws attention to the fundamental difference between learning from a population and steering a population through predictions. After covering some emerging insights on performative prediction, the lecture turns to an application of performativity to the question of power in digital economies. Traditional economic concepts struggle with identifying anti-competitive patterns in digital platforms, not least due to the difficulty of defining the market. I will introduce the notion of performative power, which sidesteps the complexity of market definition and directly measures how much a firm can benefit from steering consumer behavior. I end with a discussion of the normative implications of high performative power, its connections to measures of market power in economics, and its relationship to ongoing antitrust debates.

The talk is based on joint works with Meena Jagadeesan, Celestine Mendler-Dünner, Juan C. Perdomo, and Tijana Zrnic.
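A minimal numerical sketch of performativity (a standard toy instance, not the talk's results): the population's outcome distribution reacts to the deployed prediction, and repeated risk minimization, retraining on whatever distribution the last model induced, converges to a performatively stable point rather than an ordinary risk minimizer. The linear response `mean = a + b * theta` with `|b| < 1` is an assumption of this sketch.

```python
import numpy as np

a, b = 1.0, 0.5                 # illustrative response parameters
rng = np.random.default_rng(0)

def retrain(theta, n=50_000):
    """Best squared-loss predictor for the distribution induced by
    deploying theta: the sample mean of the reacting outcomes."""
    y = a + b * theta + rng.normal(size=n)   # population reacts to theta
    return y.mean()

theta = 0.0
for _ in range(30):             # repeated risk minimization
    theta = retrain(theta)

stable = a / (1 - b)            # fixed point of theta = a + b * theta
```

Since each retraining step is a contraction (with factor |b| < 1 here), the deployed model settles at the stable point θ = 2, illustrating the gap between learning from a population and steering it.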
1:09:05
Nicole Immorlica (Microsoft Research)
https://simons.berkeley.edu/talks/invisible-hand-prediction

We study a communication game between a sender and a receiver in which the sender has access to a set of informative signals about a state of the world. The sender chooses one of her signals, an "anecdote," and communicates it to the receiver, who then takes an action yielding a utility for both players. Sender and receiver both care about the state of the world but are also influenced by personal preferences, so their ideal actions differ. We characterize perfect Bayesian equilibria when the sender cannot commit to a particular communication scheme. In this setting the sender faces "persuasion temptation": she is tempted to select a more biased anecdote to influence the receiver's action. Anecdotes remain informative to the receiver, but persuasion comes at the cost of precision. This gives rise to "informational homophily," where the receiver prefers to listen to like-minded senders because they provide higher-precision signals. We show that for fat-tailed anecdote distributions the receiver may even prefer to talk to poorly informed senders with aligned preferences rather than to a knowledgeable expert whose preferences differ from her own, because the expert's knowledge likely also gives her access to highly biased anecdotes. We also show that under commitment, differences in personal preferences no longer affect communication, and for common distributions the sender reports the most representative anecdote: the one closest to the posterior mean.
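The precision cost of biased anecdote selection can be seen in a crude simulation (a sketch under Gaussian assumptions of my own, not the authors' model): an honest sender reports the signal closest to her posterior mean (here, the sample mean), while a biased sender reports the signal closest to her shifted target, and the biased anecdote carries much less information about the state.

```python
import numpy as np

rng = np.random.default_rng(1)

def anecdote_errors(beta, n_signals=9, trials=20_000):
    """MSE of the reported anecdote as an estimate of the state, for
    an honest sender (target = sample mean) and a biased sender
    (target = sample mean + beta)."""
    theta = rng.normal(size=trials)                    # states
    signals = theta[:, None] + rng.normal(size=(trials, n_signals))
    target_honest = signals.mean(axis=1)
    target_biased = target_honest + beta
    pick = lambda t: signals[np.arange(trials),
                             np.abs(signals - t[:, None]).argmin(axis=1)]
    honest = pick(target_honest)
    biased = pick(target_biased)
    return (np.mean((honest - theta) ** 2),
            np.mean((biased - theta) ** 2))

mse_honest, mse_biased = anecdote_errors(beta=1.0)
```

The gap between the two errors is the informational-homophily effect in miniature: a like-minded (honest-target) sender is simply a more precise channel.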
0:56:00
Fei Fang (Carnegie Mellon University)
https://simons.berkeley.edu/talks/tbd-376

Security game models have been highly successful at modeling defender-attacker interactions and similar problems in security and environmental sustainability domains. Most of the earlier work in security games relies on mathematical programming-based algorithms to solve the game and compute the optimal strategy for the defender. However, for many problems that account for more practical aspects, the game model becomes much more complex and mathematical programming-based methods are no longer applicable. In this talk, we introduce our work leveraging reinforcement learning to handle complex security games, including games with continuous action spaces, green security games with real-time information, and repeated games with unknown attacker types.
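For a sense of the mathematical-programming baseline the abstract refers to, here is a toy zero-sum security game solved by a small LP (an illustrative instance of my own, not the speaker's models): the defender covers one of three targets, the attacker attacks one, and the maximin mixed coverage comes out of `scipy.optimize.linprog`.

```python
import numpy as np
from scipy.optimize import linprog

# U[i, j]: defender's payoff when she covers target i and target j is
# attacked. A covered attack yields 0; an uncovered attack on target j
# costs the defender 4, 3, or 2 (illustrative damages).
U = np.array([[ 0.0, -3.0, -2.0],
              [-4.0,  0.0, -2.0],
              [-4.0, -3.0,  0.0]])
n, m = U.shape

# Variables (x_1..x_n, v): maximize v s.t. sum_i x_i U[i, j] >= v for
# every attack j, x a probability vector. linprog minimizes, so c = -v.
c = np.zeros(n + 1); c[-1] = -1.0
A_ub = np.hstack([-U.T, np.ones((m, 1))])      # v - x^T U[:, j] <= 0
b_ub = np.zeros(m)
A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])
b_eq = np.array([1.0])
bounds = [(0, None)] * n + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
coverage, game_value = res.x[:n], res.x[-1]
```

The optimal coverage equalizes the attacker's payoff across all three targets, which is exactly the kind of closed-form structure that breaks down in the richer games (continuous actions, real-time information) the talk targets with reinforcement learning.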
0:56:05
Vasilis Syrgkanis (Microsoft Research)
https://simons.berkeley.edu/talks/tbd-377

Given a sample of bids from independent auctions, this paper examines the question of inference on auction objects (such as valuation distributions and welfare measures) under weak assumptions on information. We leverage the recent contributions of Bergemann and Morris [2013] in the robust mechanism design literature, which exploit the link between Bayesian Correlated Equilibria and Bayesian Nash Equilibria in incomplete-information games, to construct an econometric framework that is computationally feasible and robust to assumptions about information. The key characteristic of our framework is that checking whether a particular valuation distribution belongs to the identified set is as simple as determining whether a linear program (LP) is feasible. A similar LP can be used to learn about various welfare measures and policy counterfactuals. For inference, and to summarize statistical uncertainty, we propose novel finite-sample methods using tail inequalities to construct confidence sets on identified sets. Monte Carlo experiments show adequate finite-sample properties. We illustrate our approach by applying our methods to a data set from search ad auctions and to data from OCS auctions.
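The LP feasibility test has a very compact shape, sketched below in a heavily simplified discrete setting of my own (the grids, the stylized win probabilities `W`, and the pay-your-bid utility are all illustrative assumptions, not the paper's specification): a candidate valuation distribution is in the identified set iff some joint distribution over (valuation, bid) matches the observed bid marginal, matches the candidate, and satisfies the BCE obedience constraints, all of which are linear.

```python
import numpy as np
from scipy.optimize import linprog

vals = np.array([0.0, 0.5, 1.0])      # valuation grid (illustrative)
bids = np.array([0.0, 0.25, 0.5])     # bid grid (illustrative)
W = np.array([0.2, 0.6, 1.0])         # assumed win probability per bid
u = (vals[:, None] - bids[None, :]) * W[None, :]    # u[v, b]

def in_identified_set(f, g_obs):
    """Is candidate valuation distribution f consistent with the
    observed bid marginal g_obs under some obedient joint sigma(v, b)?"""
    nv, nb = len(vals), len(bids)
    A_eq, b_eq = [], []
    for b in range(nb):               # bid marginal = observed
        row = np.zeros((nv, nb)); row[:, b] = 1.0
        A_eq.append(row.ravel()); b_eq.append(g_obs[b])
    for v in range(nv):               # valuation marginal = candidate
        row = np.zeros((nv, nb)); row[v, :] = 1.0
        A_eq.append(row.ravel()); b_eq.append(f[v])
    A_ub, b_ub = [], []
    for b in range(nb):               # obedience: no profitable deviation
        for b2 in range(nb):
            if b2 == b:
                continue
            row = np.zeros((nv, nb))
            row[:, b] = u[:, b2] - u[:, b]     # must be <= 0 on average
            A_ub.append(row.ravel()); b_ub.append(0.0)
    res = linprog(np.zeros(nv * nb), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, None)] * (nv * nb))
    return res.status == 0            # 0: a feasible sigma was found
```

For instance, a uniform valuation distribution rationalizes a uniform bid marginal here (each type obediently plays its best bid), while all-zero valuations cannot rationalize everyone bidding 0.5, and the LP correctly reports infeasibility.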