Playlist: 22 videos

Data-Driven Decision Processes Boot Camp

1:13:46
Shuchi Chawla (UT Austin)
https://simons.berkeley.edu/talks/competitive-analysis-meets-stochastic-input-secretary-problems-and-prophet-inequalities

In this talk we will discuss online decision-making problems where the input is partly adversarial and partly stochastic. In keeping with the theme of competitive analysis, we will compare the performance of the online algorithm against a hindsight optimum that observes the entire input before making decisions. I will survey techniques and results for the two dominant paradigms for these settings, namely secretary problems and prophet inequalities.
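As a concrete illustration of the secretary-problem paradigm the talk surveys, here is the classic 1/e-rule (a standard textbook algorithm, not code from the talk): observe the first n/e arrivals without committing, then accept the first value that beats everything seen during the observation phase.

```python
import math
import random

def secretary_choice(values):
    """Classic 1/e-rule: skip the first n/e values, then accept the first
    value that beats everything seen during the observation phase
    (falling back to the last value if nothing does)."""
    n = len(values)
    k = int(n / math.e)  # length of the observation phase
    best_seen = max(values[:k], default=float("-inf"))
    for v in values[k:]:
        if v > best_seen:
            return v
    return values[-1]

# On a uniformly random arrival order, this rule selects the overall
# maximum with probability approaching 1/e as n grows.
random.seed(0)
arrivals = random.sample(range(1000), 50)
print(secretary_choice(arrivals))
```

Prophet inequalities, the other paradigm in the talk, instead assume each value is drawn from a known distribution and typically use a fixed acceptance threshold rather than an observation phase.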
1:13:35
Ellen Vitercik (Stanford)
https://simons.berkeley.edu/talks/machine-learning-algorithm-design

In practice, algorithms are often run repeatedly on similar problems from a particular application domain. For example, shipping companies solve similar routing problems day after day, and although demand changes daily, the underlying problem structure (such as the road network) remains the same. A growing body of research has studied how to use data about the application domain together with machine learning to optimize an algorithm so that it has particularly strong performance on problems from that application. In this talk, I will give an overview of research that uses machine learning in the context of algorithm design. In particular, I will focus on research that provides theoretical guarantees on the performance of the resulting algorithm.
1:02:00
Laura Doval (Columbia University)
https://simons.berkeley.edu/talks/tbd-452

To summarize the possible outcomes of a strategic interaction, after having specified the players’ actions and payoffs, an analyst makes assumptions about their information and the extensive form that ties all of these elements together. Conventional equilibrium concepts begin by fixing both and proceed to provide a set of solutions for that fully specified game. An analyst, however, may know the payoff-relevant data but not the players’ private information, nor the extensive form that governs their play. Alternatively, a designer may be able to build a mechanism from these ingredients. The talk will survey the information design literature in Economics, which provides solution concepts that either fix the extensive-form game, but not the information structure (Bayes' correlated equilibrium), or do not fix either (coordinated equilibrium). Along the way, we will discuss applications to Economics, Computer Science, and Operations.
0:58:35
Laura Doval (Columbia University)
https://simons.berkeley.edu/talks/tbd-455

We develop a tool akin to the revelation principle for dynamic mechanism-selection games in which the designer can only commit to short-term mechanisms. We identify a canonical class of mechanisms, dubbed direct Blackwell mechanisms, rich enough to replicate the outcomes of any equilibrium in a mechanism-selection game between an uninformed designer and a privately informed agent. A cornerstone of our methodology is the idea that a mechanism should encode not only the rules that determine the allocation, but also the information the designer obtains from the interaction with the agent. Therefore, how much the designer learns, which is the key tension in design with limited commitment, becomes an explicit part of the design. Our result simplifies the search for the designer-optimal outcome by reducing the agent’s behavior to a series of participation, truthtelling, and Bayes’ plausibility constraints the mechanisms must satisfy. We illustrate the result via novel applications to dynamic pricing of durable goods, infinite-horizon bargaining, and product personalization under the threat of future price discrimination.
1:21:05
Michael Kim (UC Berkeley)
https://simons.berkeley.edu/talks/tbd-459

Algorithms make predictions about people constantly. The spread of such prediction systems has raised concerns that algorithms may exhibit unfair discrimination, especially against individuals from marginalized groups. We give an overview of a notion of algorithmic fairness called Multicalibration (Hebert-Johnson, K., Reingold, Rothblum '18), which formalizes the goals of fair prediction through the lens of complexity theory. Multicalibration requires that algorithmic predictions be well-calibrated, not simply overall, but simultaneously over a rich collection of subpopulations. This "multi-group" approach strengthens the guarantees of group fairness definitions, without incurring the statistical and computational costs associated with individual protections.

In this tutorial, we begin with multicalibration, discussing its fairness properties and how to learn a multicalibrated predictor. Then, we turn our attention to two more recent, related investigations. Specifically, we highlight Omniprediction (Gopalan, Kalai, Reingold, Sharan, Wieder '22), which establishes the (implicit) loss-minimization guarantees of multicalibration, and Outcome Indistinguishability (Dwork, K., Reingold, Rothblum, Yona '21), which characterizes multicalibration through the lens of computational indistinguishability. We show tight connections between each of these notions, demonstrating multicalibration's versatility as a modern learning paradigm.
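A minimal sketch of the multicalibration condition described above, with illustrative groups, predictions, and tolerance (none of these are from the tutorial): a predictor is approximately multicalibrated over a collection of groups if, within every group and at every prediction level, the average outcome is close to the predicted value.

```python
def multicalibration_violations(preds, outcomes, groups, alpha=0.1):
    """Check approximate multicalibration: for every group and every
    distinct prediction value v, the mean outcome among group members
    with prediction v must be within alpha of v.
    preds, outcomes: parallel lists; groups: dict name -> set of indices."""
    violations = []
    for name, members in groups.items():
        for v in sorted({preds[i] for i in members}):
            bucket = [outcomes[i] for i in members if preds[i] == v]
            gap = abs(sum(bucket) / len(bucket) - v)
            if gap > alpha:
                violations.append((name, v, round(gap, 3)))
    return violations

# A predictor calibrated on the whole population can still fail within
# subgroups -- exactly the gap multicalibration is designed to close:
preds = [0.5, 0.5, 0.5, 0.5]
outcomes = [1, 1, 0, 0]
groups = {"all": {0, 1, 2, 3}, "A": {0, 1}, "B": {2, 3}}
print(multicalibration_violations(preds, outcomes, groups))
# -> [('A', 0.5, 0.5), ('B', 0.5, 0.5)]
```

The example predictor averages out to 0.5 overall, so the "all" group passes, but it overpredicts for group B and underpredicts for group A.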
1:27:45
Nika Haghtalab (UC Berkeley)
https://simons.berkeley.edu/talks/tbd-460

Social and real-world considerations such as robustness, fairness, social welfare, and multi-agent tradeoffs have given rise to multi-distribution learning paradigms. In recent years, these paradigms have been studied by several disconnected communities and under different names, including collaborative learning, distributional robustness, and fair federated learning. In this short tutorial, I will highlight the importance of multi-distribution learning paradigms in general, introduce technical tools for addressing them, and discuss how these problems relate to classical and modern considerations in data-driven processes.
0:57:16
Rakesh Vohra (UPenn)
https://simons.berkeley.edu/talks/tbd-461
0:42:35
Negin Golrezaei (MIT)
https://simons.berkeley.edu/talks/tbd-463

Motivated by online decision-making in time-varying combinatorial environments, we study the problem of transforming offline algorithms into their online counterparts. We focus on offline combinatorial problems that are amenable to a constant-factor approximation using a greedy algorithm that is robust to local errors. For such problems, we provide a general framework that efficiently transforms offline robust greedy algorithms into online ones using Blackwell approachability. We show that the resulting online algorithms have O(T^{1/2}) (approximate) regret under the full-information setting. We further introduce a bandit extension of Blackwell approachability that we call Bandit Blackwell approachability. We leverage this notion to transform robust greedy offline algorithms into online algorithms with O(T^{2/3}) (approximate) regret in the bandit setting. Demonstrating the flexibility of our framework, we apply our offline-to-online transformation to several problems at the intersection of revenue management, market design, and online optimization, including product ranking optimization in online platforms, reserve price optimization in auctions, and submodular maximization. We show that our transformation, when applied to these applications, leads to new regret bounds or improves the currently known bounds.
0:34:06
Daniel Alabi (Columbia University)
https://simons.berkeley.edu/talks/blackwells-approachability-theorem-and-some-applications-optimization

In this talk, I will begin with a very personal view of David Blackwell's contributions to science and society, especially how Blackwell has influenced me as a scholar and Black man in a quantitative field. Next, I will discuss the Blackwell-Rao Theorem (1945, 1947) and the optimality of sufficient statistics in terms of mean-squared-error. Finally, I will show connections to private statistical prediction and uncertainty quantification.
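As background for the sufficient-statistics result the talk discusses, here is a minimal simulation (with illustrative parameters, not from the talk) of the key phenomenon: conditioning an unbiased estimator on a sufficient statistic never increases mean-squared error. For Bernoulli(p) samples, the crude estimator X_1 Rao-Blackwellizes to the sample mean.

```python
import random

random.seed(1)
p, n, trials = 0.3, 10, 20000

naive_se, rb_se = 0.0, 0.0
for _ in range(trials):
    x = [1 if random.random() < p else 0 for _ in range(n)]
    naive = x[0]      # unbiased but crude estimator of p
    rb = sum(x) / n   # E[x[0] | sum(x)]: the Rao-Blackwellized estimator
    naive_se += (naive - p) ** 2
    rb_se += (rb - p) ** 2

# The Rao-Blackwellized estimator has far smaller mean-squared error:
# roughly p(1-p) = 0.21 for the crude estimator vs p(1-p)/n = 0.021.
print(naive_se / trials, rb_se / trials)
```

Both estimators are unbiased; conditioning on the sufficient statistic (the sum) removes all the variance not needed to estimate p.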
0:34:31
Bin Yu (UC Berkeley)
https://simons.berkeley.edu/talks/understanding-simplicity-and-decision-trees

I knew David Blackwell as a professor and then as a friend and colleague, and most importantly, as a wonderful human being. In this talk, I will recollect my interactions with David and discuss his research philosophy of "understanding" things through simplicity, elegance, and insight. I will end with our recent work on improving decision trees, which I hope David would have liked for its simplicity and usefulness in developing high-stakes clinical decision rules.