Monday, November 16th, 2015

9:30 am – 10:10 am

I’ll discuss how kidney exchange has changed the behavior of big transplant centers in ways that have in turn changed the organization of kidney exchange and the algorithms that it deploys to create chains of transplants. 

11:20 am – 12:00 pm
We study a central economic problem for peer-to-peer online marketplaces: how to create successful matches when demand and supply are highly variable. To do this, we develop a parsimonious model of a frictional matching market for services, which lets us derive the elasticity of labor demand and supply, the split of surplus between buyers and sellers, and the efficiency with which requests and offers for services are successfully matched. We estimate the model using data from TaskRabbit, a rapidly expanding platform for domestic tasks, and report three main findings. First, supply is highly elastic: in periods when demand doubles, sellers work almost twice as hard, prices hardly increase, and the probability of requested tasks being matched falls only slightly. Second, we estimate the average gains from each trade to be $37. Because of the matching frictions and search costs needed to find potential matches, the ex-ante gains are more modest, but they are maximized by the elastic labor supply: if the number of hours worked were held constant, there would be 15 percent fewer matches in equilibrium. Third, we find that platform success varies greatly across cities. The cities that grow fastest in the number of users are also those where the market fundamentals promote efficient matching of buyers and sellers. This heterogeneity in matching efficiency is not attributable to scale economies, but is instead related to two measures of market thickness: geographic density (buyers and sellers living close together) and level of task standardization (buyers requesting homogeneous tasks).
(joint with Zoë Cullen)
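
To make the elasticity logic concrete, here is a toy calculation with a generic constant-returns matching function; the functional form and all numbers are our own illustration, not the paper's estimated model:

```python
def matches(buyers, sellers, A=0.9, alpha=0.5):
    """Toy Cobb-Douglas matching function m = A * B^alpha * S^(1-alpha),
    a standard frictional-matching workhorse, capped by the short side."""
    return min(A * buyers ** alpha * sellers ** (1 - alpha), buyers, sellers)

B, S = 100.0, 100.0
base = matches(B, S) / B                     # baseline match probability
rigid = matches(2 * B, S) / (2 * B)          # demand doubles, supply fixed
elastic = matches(2 * B, 1.9 * S) / (2 * B)  # sellers work ~twice as hard
print(f"match prob: base {base:.2f}, rigid {rigid:.2f}, elastic {elastic:.2f}")
```

When supply responds almost one-for-one, the match probability barely falls, which is the qualitative pattern the abstract reports.
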
1:30 pm – 2:10 pm

In this presentation we will see some applications of game theory and economics to the design of business models.

2:10 pm – 2:50 pm

Reputation mechanisms used by platform markets suffer from two problems. First, buyers may draw conclusions about the quality of the platform from single transactions, causing a reputational externality. Second, reputation measures may be coarse or biased, preventing buyers from making proper inferences. We document these problems using eBay data and claim that platforms can benefit from identifying and promoting higher quality sellers. Using an unobservable measure of seller quality, we demonstrate the benefits of our approach through a large-scale controlled experiment. Highlighting the importance of reputational externalities, we chart an agenda that aims to create more realistic models of platform markets.

3:20 pm – 4:00 pm

A host of new platforms use dynamic prices as a key tool to balance supply and demand. Dynamic prices have at least two potential effects on the market: on the demand side, they allocate resources to those with the highest willingness to pay, throttling demand; on the supply side, they signal to potential suppliers that this is a time of high demand, potentially leading to an increase in the services supplied. This latter effect distinguishes dynamic pricing in this context from our traditional notions of intertemporal price discrimination. We study the case of Uber, a platform that matches riders to drivers and uses "surge pricing" during periods of high demand. We present stylized facts consistent with both demand throttling and supply-side increases. We then use a structural model to decompose these effects and consider a counterfactual world with no surge pricing, quantifying the resulting inefficiencies.
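
As a toy illustration of the two channels, consider constant-elasticity demand and supply curves and the multiplier that clears the market; all curves and parameters are invented for illustration, and the talk's structural model is far richer:

```python
def demand(p, d0=120.0, eps_d=0.6):   # ride requests at surge multiplier p
    return d0 * p ** (-eps_d)

def supply(p, s0=80.0, eps_s=0.8):    # driver-hours offered at multiplier p
    return s0 * p ** eps_s

def clearing_multiplier(lo=0.25, hi=8.0, iters=60):
    """Bisection on excess demand, which is decreasing in p."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if demand(mid) > supply(mid) else (lo, mid)
    return 0.5 * (lo + hi)

p = clearing_multiplier()
print(f"surge {p:.2f}x: {supply(p):.0f} rides")
print(f"fixed 1.00x: {min(demand(1), supply(1)):.0f} rides, "
      f"{max(0.0, demand(1) - supply(1)):.0f} riders unserved")
```

The surge multiplier simultaneously throttles demand and draws out supply; with a fixed price the same demand shock simply leaves riders unserved.
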

Tuesday, November 17th, 2015

9:30 am – 10:10 am
We model an online display advertising environment in which "performance" advertisers can measure the value of individual impressions, whereas "brand" advertisers cannot. If advertiser values for ad opportunities are positively correlated, second-price auctions for impressions can be very inefficient. Bayesian-optimal auctions are complex, introduce incentives for false-name bidding, and disproportionately allocate low-quality impressions to brand advertisers. We introduce "modified second bid" auctions as the unique auctions that overcome these disadvantages.

When advertiser match values are drawn independently from heavy-tailed distributions, a modified second bid auction captures at least 94.8% of the first-best expected value. In that setting and similar ones, the benefits of switching from an ordinary second-price auction to the modified second bid auction may be large, and the cost of defending against shill bidding and adverse selection may be low.
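
The adverse-selection premise is easy to see in a toy simulation of an ordinary second-price auction; the distributions and parameters below are ours, and the modified second bid mechanism itself is specified in the talk:

```python
import random
import statistics as st

random.seed(1)
N, n_perf, brand_bid = 100_000, 3, 1.0   # brand advertiser bids a flat value
all_q, brand_q = [], []
for _ in range(N):
    q = random.lognormvariate(0, 0.8)    # common impression quality
    all_q.append(q)
    # performance bidders observe values positively correlated through q
    perf_bids = [q * random.lognormvariate(0, 0.5) for _ in range(n_perf)]
    if max(perf_bids) < brand_bid:       # brand wins only when all of them pass
        brand_q.append(q)

print("mean quality, all impressions :", round(st.mean(all_q), 3))
print("mean quality, brand's winnings:", round(st.mean(brand_q), 3))
```

Because performance bidders outbid the brand advertiser precisely on high-quality impressions, the brand advertiser is left with a markedly below-average pool.
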
10:40 am – 11:20 am

I will summarize several papers on machine learning in an exchange environment. The use to which ML is put matters, mainly through the errors it makes.

11:20 am – 12:00 pm
In 2002, Eric Schmidt invited me to spend a year at Google.  I asked him what I should work on and he said "why don't you take a look at this ad auction --- I think it might make us a little money."  
 
I followed his suggestion and created a model of the Google Ad Auction.  This talk will be a bit about the history of that work, but mostly about how we used the auction model for subsequent decisions over the next few years.
 
The intent is to convey how the theory influenced the practice in this particular domain.
1:30 pm – 2:10 pm

Capacity or budget planning is an important component of any online ad serving platform. This has given rise to a rich and exciting line of work in online algorithms, and one of the most successful marriages of theory and practice. The practical aspects have directly influenced the theory and the theory has had significant impact on the design of modern advertising systems. The talk will give an overview of this interaction and some recent results on a very general problem called the online convex programming problem.
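
As one textbook instance of the flavor of online problem involved (not necessarily the algorithms covered in the talk), here is a dual-based budget-pacing sketch in which a bidder shades bids by a multiplier adjusted with online gradient steps; all names and parameters are ours:

```python
import random

def paced_bidding(stream, budget, horizon, eta=0.01):
    """Bid value/(1+lam) in a second-price auction; raise the dual variable
    lam when spend outpaces the per-round budget target, relax it otherwise."""
    lam, spend, wins = 0.0, 0.0, 0
    target = budget / horizon
    for value, price in stream:
        bid = value / (1.0 + lam)
        cost = price if bid >= price and spend + price <= budget else 0.0
        spend += cost
        wins += cost > 0
        lam = max(0.0, lam + eta * (cost - target))   # online gradient step
    return round(spend, 1), wins

random.seed(0)
stream = [(random.random(), 0.3 * random.random()) for _ in range(10_000)]
print(paced_bidding(stream, budget=50.0, horizon=10_000))
```

The dual variable spreads a limited budget across the horizon, winning the cheapest high-value opportunities rather than exhausting the budget early.
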

2:40 pm – 4:00 pm

With over 1 billion daily active users and over 2.5 million advertisers, Facebook runs one of the world’s largest online advertising marketplaces. Advertisers can choose to bid and pay for ad impressions, for clicks, for conversions, or for a combination. The ranking and pricing of ads depend on the predicted probabilities of clicks and conversions. In this talk we give a brief overview of the algorithmic problems underlying the Facebook ads auction. We share some lessons learnt from building the scalable machine learning platform we use to tackle the prediction problem, and discuss challenges in optimization and mechanism design.
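
To fix ideas, here is a generic expected-value ranking rule of the kind common in ad auctions, with a second-price-style per-click payment; this is an illustration of the coupling between prediction and pricing, not a description of Facebook's production system:

```python
def rank_and_price(ads):
    """ads: list of (name, bid_per_click, predicted_ctr). Rank by expected
    value per impression (bid * pCTR); charge the winner, per click, the
    smallest bid that would still have won."""
    ranked = sorted(ads, key=lambda a: a[1] * a[2], reverse=True)
    (w_name, w_bid, w_pctr), (_, r_bid, r_pctr) = ranked[0], ranked[1]
    return w_name, min(w_bid, r_bid * r_pctr / w_pctr)

ads = [("A", 2.0, 0.01), ("B", 1.0, 0.03), ("C", 4.0, 0.005)]
print(rank_and_price(ads))   # B wins: 1.0 * 0.03 beats both rivals' eCPMs
```

Note how prediction quality directly shapes both the allocation and the price, which is why the machine learning and mechanism design problems are intertwined.
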

Wednesday, November 18th, 2015

9:30 am – 10:10 am
Many systems involve the allocation of rewards for achievements, and these rewards produce a set of incentives that in turn guide behavior. Such effects are visible in many domains from everyday life, and they are increasingly forming a designed aspect of participatory on-line sites through the use of badges and other reward systems. 
 
We consider several aspects of the interaction between rewards and incentives in the context of collective effort, including a framework for reasoning about on-line user activity in the presence of milestones and badges, and some recent applications of this framework that we have pursued on sites including Stack Overflow and Coursera. We then consider the role of behavioral biases in reasoning about progress toward long-range goals, and to this end we propose a graph-theoretic framework for analyzing procrastination and other forms of planning behavior that are inconsistent over time. 
 
The talk includes joint work with Ashton Anderson, Dan Huttenlocher, Jure Leskovec, and Sigal Oren.
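
A minimal sketch of a present-biased traversal, in the spirit of the graph-theoretic framework mentioned above; the parameterization, with immediate costs inflated by a factor b, is one common choice and may differ from the talk's:

```python
import heapq

def dists_to(graph, target):
    """True remaining cost to target from every node (Dijkstra, reversed)."""
    rev = {}
    for u, edges in graph.items():
        for v, c in edges:
            rev.setdefault(v, []).append((u, c))
    dist, pq = {target: 0.0}, [(0.0, target)]
    while pq:
        d, v = heapq.heappop(pq)
        if d > dist.get(v, float("inf")):
            continue
        for u, c in rev.get(v, []):
            if d + c < dist.get(u, float("inf")):
                dist[u] = d + c
                heapq.heappush(pq, (d + c, u))
    return dist

def traverse(graph, start, target, b):
    """Agent re-plans at every node, weighting the immediate edge by b >= 1
    but evaluating the rest of the trip at its true shortest-path cost."""
    dist = dists_to(graph, target)
    path, v = [start], start
    while v != target:
        v = min(graph[v], key=lambda e: b * e[1] + dist[e[0]])[0]
        path.append(v)
    return path

# Do the work now ('w', hard) or procrastinate ('p', easy now, costly later).
graph = {"s": [("w", 4.0), ("p", 1.0)], "w": [("t", 2.0)],
         "p": [("t", 6.0)], "t": []}
print(traverse(graph, "s", "t", b=1.0))  # patient: s -> w -> t, total cost 6
print(traverse(graph, "s", "t", b=3.0))  # biased:  s -> p -> t, total cost 7
```
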
10:40 am – 11:20 am
While the Internet has dramatically changed many socio-economic interactions, there are still no widely used platforms for large-scale substantive discussion on complex societal issues, or for large-scale participatory democracy. The lack of such platforms makes it hard to formulate algorithmic problems and design mechanisms which would, in turn, accelerate the development and deployment of these platforms. In this talk, we will describe some of our preliminary attempts at breaking this cycle by describing two systems that we have built and deployed for participatory democracy, and outlining some of the algorithmic and mechanism design questions that arise.
 
We will first present a voting platform for participatory budgeting. Most budgeting problems are knapsack problems, and hence most participatory budgeting problems can be reduced to knapsack voting. We will describe partial progress towards incentive-compatible and pragmatic voting rules for knapsack voting. We will also outline some research directions in the area of knapsack voting in particular, and structured decision making (where the votes and the decisions admit a simple formal description) in general.
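
A minimal sketch of the reduction, using an approval-style ballot and a simple (not incentive-compatible) aggregation rule of our own choosing: score each project by its approval count, then solve the knapsack over the budget:

```python
def knapsack_vote(projects, ballots, budget):
    """projects: {name: cost}; ballots: iterable of approved-project sets.
    Dynamic program for the feasible subset maximizing total approvals."""
    items = [(p, cost, sum(p in b for b in ballots))
             for p, cost in projects.items()]
    best = [(0, [])] * (budget + 1)   # best[b] = (score, chosen) at spend <= b
    for p, cost, score in items:
        for b in range(budget, cost - 1, -1):
            cand = (best[b - cost][0] + score, best[b - cost][1] + [p])
            if cand[0] > best[b][0]:
                best[b] = cand
    return best[budget]

projects = {"park": 4, "library": 5, "bike lanes": 3, "wifi": 2}
ballots = [{"park", "library"}, {"park", "bike lanes", "wifi"},
           {"library", "wifi"}]
print(knapsack_vote(projects, ballots, budget=9))  # (5, [...]) best bundle
```
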
 
We will then describe an experiment in unstructured decision making, where a characterization of the decision space is not known a priori. We will use this to outline general design principles for these systems, to highlight the need for "consensus markets" (akin to "prediction markets"), and present a candidate design that holds promise but doesn't quite work.
11:20 am – 12:00 pm
Peer prediction is the problem of information elicitation without verification. A problem that has plagued its application in practice has been the need for simple, "signal-only" schemes and the need to knock out undesirable, uninformative equilibria.
 
Following Dasgupta and Ghosh (2013), we allow agents to report signals on overlapping sets of independent tasks. We characterize conditions under which the prior-free DG mechanism generalizes from the binary-signal to the multi-signal setting while retaining strong truthfulness, so that truthful reporting yields the maximum payoff across all equilibria (tied only with reporting permutations). We also obtain a greatly simplified proof of their result.
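
Our reading of the pairing score in the Dasgupta-Ghosh scheme, as a sketch with invented task structure and reports: reward agreement on a shared "bonus" task and subtract the baseline agreement rate measured on the two agents' disjoint task sets:

```python
def dg_score(r_i, r_j, bonus, tasks_i, tasks_j):
    """r_i, r_j: dicts mapping task id -> reported signal. tasks_i/tasks_j
    are disjoint from each other and from the shared bonus task."""
    agree = int(r_i[bonus] == r_j[bonus])
    baseline = sum(r_i[m] == r_j[n] for m in tasks_i for n in tasks_j)
    return agree - baseline / (len(tasks_i) * len(tasks_j))

# Agents share task 0; agent i also graded {1, 2}, agent j graded {3, 4}.
r_i = {0: "A", 1: "A", 2: "B"}
r_j = {0: "A", 3: "B", 4: "A"}
print(dg_score(r_i, r_j, bonus=0, tasks_i=[1, 2], tasks_j=[3, 4]))  # 1 - 2/4
```

The penalty term is what makes blind always-agree strategies unprofitable: they inflate the baseline exactly as much as the bonus.
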
 
In extensions we develop a mechanism that uses knowledge of the signal distribution to obtain a slightly weaker incentive property in all domains: no strategy provides more payoff in equilibrium than truthful reporting, and truthful reporting is strictly better than any uninformed strategy. In an analysis of peer-evaluation data from a large MOOC platform, we investigate how well student reports fit our models, and evaluate how the proposed scoring mechanisms would perform in practice. We find some surprises in the distributions, but conclude that our modified mechanisms would do well.
 
Time permitting I will describe directions for future work, including design under replicator dynamics and handling heterogeneity of taste amongst participants.
 
Joint work with Rafael Frongillo (U Colorado, Boulder) and Victor Shnayder (edX).
1:30 pm – 2:10 pm

The ability to computationally solve imperfect-information games has a myriad of future applications ranging from auctions, negotiations, and (cyber)security settings to guiding evolution and adaptation in medical domains. A dramatic scalability leap has occurred in the capability to solve such games over the last nine years, fueled in large part by the Annual Computer Poker Competition. I will discuss the key domain-independent techniques that enabled this leap, including automated abstraction techniques and approaches for mitigating the issues that they raise, new equilibrium-finding algorithms, safe opponent exploitation methods, techniques that use qualitative knowledge as an extra input, and endgame solving techniques. Time permitting, I will include new results on 1) developing the world’s best Heads-Up No-Limit Texas Hold'em poker program, 2) theory that enables abstraction that gives solution quality guarantees, 3) techniques for warm starting equilibrium finding, 4) simultaneous abstraction and equilibrium finding, 5) lossless tree pruning algorithms for incomplete-information settings, and 6) theory that improves prior best gradient-based equilibrium-finding algorithms. I will also cover the Brains vs AI competition that I recently organized, where our AI, Claudico, challenged four of the top-10 human pros in Heads-Up No-Limit Texas Hold'em for 80,000 hands. The talk covers joint work with many co-authors, mostly Noam Brown, Sam Ganzfried, and Christian Kroer.
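
For readers new to the area, regret matching, the update rule at the heart of the counterfactual-regret family of equilibrium finders, is easy to state on a toy game; the talk's algorithms operate on vastly larger imperfect-information trees:

```python
import random

PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]   # rock-paper-scissors

def regret_matching(T=200_000, seed=0):
    """Self-play: mix in proportion to positive cumulative regret; the
    time-averaged strategies converge to Nash (uniform in RPS)."""
    rng = random.Random(seed)
    regret = [[0.0] * 3 for _ in range(2)]
    avg = [[0.0] * 3 for _ in range(2)]
    for _ in range(T):
        strats = []
        for p in range(2):
            pos = [max(r, 0.0) for r in regret[p]]
            z = sum(pos)
            strats.append([x / z for x in pos] if z else [1 / 3] * 3)
        acts = [rng.choices(range(3), s)[0] for s in strats]
        for p in range(2):
            mine, theirs = acts[p], acts[1 - p]
            for a in range(3):   # regret for not having played a instead
                regret[p][a] += PAYOFF[a][theirs] - PAYOFF[mine][theirs]
            for a in range(3):
                avg[p][a] += strats[p][a]
    return [[round(x / T, 3) for x in row] for row in avg]

print(regret_matching())   # both average strategies approach [1/3, 1/3, 1/3]
```
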

2:10 pm – 2:50 pm
Security is a critical concern around the world, whether it is the challenge of protecting ports, airports and other critical infrastructure, protecting endangered wildlife, forests and fisheries, suppressing urban crime, or ensuring security in cyberspace. Unfortunately, limited security resources prevent full security coverage at all times; instead, we must optimize the use of limited security resources. To that end, our "security games" framework, based on computational and behavioral game theory while also incorporating elements of AI planning under uncertainty and machine learning, has led to the building and deployment of decision aids for security agencies in the US and around the world. These decision aids are in use by agencies such as the US Coast Guard for protection of ports and ferry traffic and the Federal Air Marshals Service for security of air traffic, and by various police agencies for security of university campuses, airports and metro trains. Moreover, recent work on "green security games" has led our decision aids to be deployed assisting NGOs in the protection of wildlife, and "opportunistic crime security games" have focused on suppressing urban crime. Finally, our security-game-based startup, ARMORWAY, is further enabling the deployment of game-theoretic security resource optimization. I will discuss our use-inspired research in security games that is leading to new research challenges, including algorithms for scaling up security games as well as for handling significant adversarial uncertainty and learning models of human adversary behaviors.
 
(*) Joint work with a number of current and former PhD students and postdocs, all listed at teamcore.usc.edu/security.
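
The core idea of randomized security resource allocation fits in a few lines for a toy two-target Stackelberg game; this is a classroom example, while deployed systems solve large optimization problems with scheduling constraints and behavioral adversary models:

```python
def optimal_coverage(u1, u2):
    """One defender resource, coverage probabilities (c, 1 - c) over two
    targets worth u1, u2 to the attacker when uncovered (0 when covered).
    The attacker best-responds, so the defender equalizes the attacker's
    expected payoffs: u1 * (1 - c) = u2 * c  =>  c = u1 / (u1 + u2)."""
    c = u1 / (u1 + u2)
    return c, u1 * (1 - c)

def attacker_value(c, u1, u2):
    return max(u1 * (1 - c), u2 * c)   # attacker picks the better target

c, v = optimal_coverage(5.0, 2.0)
print(f"cover target 1 with prob {c:.2f}; attacker's value {v:.2f}")
print(f"naive 50/50 coverage yields {attacker_value(0.5, 5.0, 2.0):.2f}")
```

Weighting coverage toward the high-value target cuts the attacker's expected payoff from 2.50 to about 1.43, which is why optimized randomization beats uniform patrols.
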
3:20 pm – 4:00 pm
In the past decade the Economics literature has developed a unified approach to inference and prediction of strategic environments. This approach starts with a full theoretical model that characterizes the preferences of agents and the mechanism of interaction between them.
 
The Econometrician then infers the components of the theoretical model from the data and provides predictions for new settings by computing a new equilibrium of the strategic model. The issue with this approach is that the initial step, where the Econometrician recovers the preferences of the agents from the data, boils down to an inversion of a nonlinear mapping that can be discontinuous and even set-valued. In the talk I will discuss the properties of this mapping for classes of simple games and demonstrate that even in cases where this mapping is invertible, the recovered pre-image is very sensitive to the specification of the model. I will also show that such poor behavior of the solution is preserved even when one uses strong conditions to "regularize" this mapping. All these problems are further amplified in prediction.
 
The approach of set inference provides a robust alternative to traditional inference. In this approach the Econometrician recovers an entire set of preferences of the agents in the interactions that are compatible with many possible specifications of the theoretical model. However, computation of such sets is difficult even in simple games. In my talk I discuss a new approach to inference that is based on the idea of the price of anarchy in Koutsoupias and Papadimitriou (1999). The idea is to bypass set inference for the primitives of the model and instead directly infer the outcomes of interest, such as welfare or revenue in the game. However, unlike the standard price of anarchy, which is based on a "worst case scenario" prediction for the outcomes, we propose bounds that are informed by the distribution of the data. I will introduce the new notion of the empirical price of anarchy, which yields the price of anarchy over all preferences of agents that could have generated the observable distribution of the data. I will then discuss some connections between our notion of the empirical price of anarchy and the Economics literature on set inference.
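
For reference, the standard worst-case notion from Koutsoupias and Papadimitriou, together with our paraphrase of the data-driven variant described above (the exact definition is in the cited papers):

```latex
% Worst-case price of anarchy of a game G with strategy profiles S,
% equilibrium set EQ(G), and welfare objective W:
\mathrm{PoA}(G) \;=\; \frac{\max_{s \in S} W(s)}{\min_{\sigma \in \mathrm{EQ}(G)} W(\sigma)}
% Empirical variant (our paraphrase): worst case only over the set
% \Theta(D) of agent preferences consistent with the observed data D:
\widehat{\mathrm{PoA}}(D) \;=\; \sup_{\theta \in \Theta(D)} \mathrm{PoA}(G_\theta)
```
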
 
The talk will be based on the recent work [Khan and Nekipelov, 2014] and [Hoy, Nekipelov and Syrgkanis, 2015].

Thursday, November 19th, 2015

9:30 am – 10:10 am
Speaker: Andrew Lo, MIT
In a simple evolutionary model with one-period agents that make binary choices which determine their reproductive success, we show that natural selection is capable of generating several behaviors that have been observed in organisms ranging from ants to human subjects, including risk-sensitive foraging, risk aversion, loss aversion, probability matching, randomization, and diversification. Given an initial population of individuals, each assigned a purely arbitrary behavior with respect to a binary choice problem, and assuming that offspring behave identically to their parents, only those behaviors linked to reproductive success will survive, and less reproductively successful behaviors will disappear at exponential rates.  When the uncertainty in reproductive success is systematic, natural selection yields behaviors that may be individually sub-optimal but are optimal from the population perspective; when reproductive uncertainty is idiosyncratic, the individual and population perspectives coincide.  The simplicity and generality of our model imply that these derived behaviors are primitive and universal within and across species.  This framework also suggests a natural definition of intelligence---any behavior positively correlated with reproductive success---and links physiological and environmental constraints to the degree of intelligence that emerges, i.e., bounded rationality.
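
A toy instance of this kind of binary-choice model (our parameterization) shows why systematic risk produces probability matching while idiosyncratic risk rewards deterministic behavior:

```python
import math

def growth_systematic(f, p, fec=2.0):
    """Common shock: with prob p action a succeeds (its takers each get
    `fec` offspring, others none), else action b succeeds. A large type-f
    lineage multiplies by fec*f or fec*(1-f), so log-growth is:"""
    return p * math.log(fec * f) + (1 - p) * math.log(fec * (1 - f))

def growth_idiosyncratic(f, p, fec=2.0):
    """Independent shocks per individual: growth is linear in f, so the
    optimum is a deterministic corner, not a mixture."""
    return fec * (f * p + (1 - f) * (1 - p))

p, grid = 0.7, [0.1, 0.3, 0.5, 0.7, 0.9]
print("systematic   :", max(grid, key=lambda f: growth_systematic(f, p)))    # 0.7 = p
print("idiosyncratic:", max(grid, key=lambda f: growth_idiosyncratic(f, p))) # corner
```

Under the common shock, the lineage that randomizes with exactly the environmental frequency p grows fastest, matching the abstract's distinction between the population and individual perspectives.
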
11:20 am – 12:00 pm
Traditional financial markets have undergone rapid technological change due to increased automation and the introduction of new mechanisms. Such changes have brought with them challenging new problems in algorithmic trading, many of which invite a machine learning approach. I will briefly survey several algorithmic trading problems, focusing on their novel ML and strategic aspects, including limiting market impact, dealing with censored data, and incorporating risk considerations.
1:30 pm – 2:10 pm

This talk will discuss the FCC’s “incentive auction”, which proposes to give television broadcasters an opportunity to sell their broadcast rights, to repack remaining broadcasters into a smaller block of spectrum, and to resell the freed airwaves to telecom companies. The stakes for this auction are huge—projected tens of billions of dollars in revenue for the government—justifying the design of a special-purpose descending-price auction mechanism, which I’ll discuss. An inner-loop problem in this mechanism is determining whether a given set of broadcasters can be repacked into a smaller block of spectrum while respecting radio interference constraints. This is an instance of a (worst-case intractable) graph coloring problem; however, stations’ broadcast locations and interference constraints are all known in advance. Early efforts to solve this problem considered mixed-integer programming formulations, but were unable to reliably solve realistic, national-scale problem instances. We advocate instead for the use of a SAT encoding, paired with a wide range of techniques: constraint graph decomposition; novel caching mechanisms that allow reuse of partial solutions from related, solved problems; algorithm configuration; algorithm portfolios; and the marriage of local-search and complete solver strategies. We show that our approach solves virtually all of a set of problems derived from auction simulations within the short time budget required in practice.
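
A toy version of the station-repacking feasibility check as a CNF encoding; real instances add station-specific allowable channel sets and pairwise channel-offset constraints, and are dispatched to a portfolio of solvers:

```python
from itertools import combinations

def repack_cnf(stations, channels, interference):
    """Boolean var x[s, c]: station s is assigned channel c. Clauses:
    every station gets a channel, no station gets two, and interfering
    station pairs never share a channel."""
    var = {(s, c): i + 1 for i, (s, c) in
           enumerate((s, c) for s in stations for c in channels)}
    clauses = [[var[s, c] for c in channels] for s in stations]  # >= 1
    for s in stations:                                           # <= 1
        clauses += [[-var[s, c1], -var[s, c2]]
                    for c1, c2 in combinations(channels, 2)]
    for s1, s2 in interference:                                  # conflicts
        clauses += [[-var[s1, c], -var[s2, c]] for c in channels]
    return clauses

cnf = repack_cnf(["WAAA", "WBBB", "WCCC"], [14, 15],
                 interference=[("WAAA", "WBBB"), ("WBBB", "WCCC")])
print(len(cnf), "clauses, ready for a DIMACS-style SAT solver")
```
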

2:10 pm – 2:50 pm

We propose formal definitions of substitutes and complements for information in the context of decision making under uncertainty. A decision maker needs to take an action to maximize his expected utility. Two pieces of information are substitutes (complements) for the decision problem if learning more about one decreases (increases) the marginal value of learning the other. Interestingly, a special class of decision problems is complete for understanding the value of information: prediction problems, in which the decision maker predicts the distribution of a random variable and is rewarded for his prediction according to a proper scoring rule. Every decision problem has an "equivalent" prediction problem in which the value of information is the same. Using this understanding of informational substitutes and complements, we characterize the game-theoretic equilibria of strategic play in prediction markets for any market scoring rule.
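
In our notation, the definitions sketched above take the following shape (a paraphrase of the abstract, not the paper's exact statements):

```latex
% Value of a set S of signals in a decision problem with actions A,
% state \omega, and utility u:
V(S) \;=\; \mathbb{E}\Big[\max_{a \in A} \mathbb{E}\big[u(a,\omega) \mid S\big]\Big]
% Substitutes: each signal's marginal value shrinks as more is known
% (a submodularity condition); complements reverse the inequality:
V(T \cup \{i, j\}) - V(T \cup \{j\}) \;\le\; V(T \cup \{i\}) - V(T)
```
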

This talk is based on joint work with Bo Waggoner. 

 

Friday, November 20th, 2015

9:30 am – 10:10 am

No abstract available.

10:40 am – 11:20 am
Bitcoin's departure from traditional banking and money transmitting models make it one of the most disruptive and daring economic experiments of our time. Behind the scenes, the system which offers payments to nodes that support the currency, is ruled by the resulting incentives. I will overview the interaction between the protocol and different aspects of its underlying incentive scheme. In particular, I will present areas where improvements are needed, and where AGT can (hopefully) help.
(The talk will be self-contained, and no prior knowledge of the Bitcoin protocol will be assumed.)
11:20 am – 12:00 pm

Ride-sharing platforms such as Lyft and Uber are among the fastest growing online marketplaces. A key feature of these platforms is the implementation of fine-grained, fast-timescale dynamic pricing, where prices can react to the instantaneous system state and vary across very small geographic areas. In this talk, we explore the value of such fast-timescale dynamic pricing. To do so, we develop a new queueing model for ride-sharing platforms, which combines the stochastic dynamics of the platform's operations with strategic models of both passenger and driver behavior. Using this model, we demonstrate that dynamic pricing may not be better than the optimal quasi-static price in most settings. However, finding the optimal static price requires exact knowledge of system parameters; we show that dynamic pricing is much more robust to fluctuations in these parameters than static prices are. Thus, dynamic pricing does not necessarily buy more than static pricing; rather, it lets platforms realize the benefits of optimal static pricing with imperfect knowledge of system parameters.

Joint work with Siddhartha Banerjee (Cornell) and Carlos Riquelme (Stanford).
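
A stripped-down illustration of the kind of comparison the model formalizes; this toy simulation captures only demand throttling (drivers here are passive, and every parameter is invented):

```python
import random

def simulate(price_fn, lam=30.0, n_drivers=15, mu=1.0, T=5_000.0, seed=7):
    """Passengers arrive Poisson(lam) and accept the quoted price p with
    prob max(0, 1 - p); a match needs an idle driver, who is then busy
    for an Exp(mu) trip. Returns completed matches per unit time."""
    rng = random.Random(seed)
    t, matches, busy = 0.0, 0, []          # busy: trip completion times
    while t < T:
        t += rng.expovariate(lam)          # next passenger arrival
        busy = [b for b in busy if b > t]  # free up finished drivers
        idle = n_drivers - len(busy)
        if idle > 0 and rng.random() < max(0.0, 1.0 - price_fn(idle)):
            matches += 1
            busy.append(t + rng.expovariate(mu))
    return matches / T

static = lambda idle: 0.5                          # one posted price
surge = lambda idle: 0.8 if idle <= 3 else 0.45    # raise price when scarce
print("static:", round(simulate(static), 2),
      "surge:", round(simulate(surge), 2))
```

Re-running with a misspecified arrival rate (say lam=45 when the static price was tuned for 30) is a quick way to see the robustness point: the state-dependent rule adapts while the static price does not.
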