As important decisions about how we live with one another in society rely increasingly on algorithms, we must ask: do these algorithms adequately account for all the factors at play in the systems they model?
As Matt Weinberg of Princeton University recounted in an October seminar at the Simons Institute, this is an issue with Bitcoin, and with proof-of-work–based cryptocurrencies more generally.
The original 2008 white paper proposing Bitcoin, by the pseudonymous Satoshi Nakamoto, announced “a system for electronic transactions without relying on trust.” Nakamoto put forth a utopian vision of a decentralized system “robust in its unstructured simplicity.” That system grew quickly, fed by a combination of speculation and an ideological alignment with the libertarian consensus that had formed in Silicon Valley. One year after its 2009 launch, the market capitalization of Bitcoin in US dollars was $500 million; as of the time of writing, it is hovering just shy of $180 billion. In the span of just over a decade, an anonymous, theoretical proposal became a major economic force in the world.
Bitcoin was the first cryptocurrency based on a blockchain. Proof-of-work blockchains rely on solving computationally difficult mathematical problems in order to add new blocks to the chain. Mining is the process of solving those problems — Bitcoin miners attempt to solve the problems, and very roughly speaking, the first to successfully solve a problem in a given timestep receives a reward (in bitcoins) called the block reward. This is the incentive to mine: by doing the work (the computation involved in mining the block), miners are rewarded with the cryptocurrency itself. A mining pool is a centralized group whose members share their processing power as well as the rewards for finding new blocks in the blockchain.
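Very roughly, the "problem" miners race to solve is a hash puzzle: find a nonce that makes the hash of the block's contents fall below a difficulty target. The following is a minimal, illustrative Python sketch — the real protocol hashes a structured block header (twice) and adjusts the target dynamically, none of which is modeled here.

```python
import hashlib

def mine(block_data: str, difficulty_bits: int) -> int:
    """Toy proof-of-work: search for a nonce such that the SHA-256 hash
    of (block_data + nonce) starts with `difficulty_bits` zero bits."""
    target = 2 ** (256 - difficulty_bits)   # hashes below this value win
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce                    # this nonce "solves" the block
        nonce += 1

winning_nonce = mine("block with some transactions", difficulty_bits=12)
```

Each extra difficulty bit doubles the expected work, which is why mining rewards scale with raw hash power — and why cheap electricity matters so much.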
Bitcoin’s runaway success obscures the fact that Bitcoin has failed in its central aim. It was intended to be decentralized. It is not.
Four mining pools account for a majority of blocks. The 10 largest pools account for about 90 percent of all blocks. Pools based in China, an authoritarian state, control about 80 percent of blocks. This means that a small group of miners could theoretically delete past transactions, or freeze users’ funds.1 Bitcoin has replaced one form of somewhat centralized control — the conventional banking and financial system — with another even more centralized one that lacks similar mechanisms of transparency, governance, and accountability. What’s more, as Nick Arnosti and Weinberg wrote, “the currently-observed concentration of mining power is not a temporary aberration, but rather a natural result of the economic incentives established by Bitcoin and other ‘proof of work’ based protocols.”
Why is this? As Weinberg recounted, “Even small asymmetries in cost turn out to be really important.” It is a reality that electricity will be cheaper in some places than in others. But neither Nakamoto nor Bitcoin’s proponents anticipated that, as Weinberg put it, “asymmetric costs completely dwarf marginal returns.” The surprising thing, he said, is how much they matter. What’s more, it didn’t take an incredibly novel insight to arrive at this conclusion: the formal model that Weinberg and Arnosti used to demonstrate this is identical to that used in a 1980 paper on rent-seeking. As they wrote, their contribution was to couch these results in a cleaner fashion and present them to the theoretical computer science community.
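The flavor of that rent-seeking model can be seen in a toy Tullock-contest computation — a simplified sketch in the spirit of the argument, not Arnosti and Weinberg's actual model. Miners buy hash power at a per-unit cost, and a fixed block reward is split in proportion to power. In equilibrium, a miner's profit works out to the square of its share, so small cost asymmetries translate into much larger profit asymmetries.

```python
def equilibrium_shares(costs):
    """Equilibrium of a Tullock-style contest: miner i pays costs[i] per
    unit of hash power; a reward normalized to 1 is split in proportion
    to power.  Standard result: with the n cheapest miners active, total
    power is X = (n - 1) / sum(active costs), and an active miner's
    share is 1 - costs[i] * X.  Returns shares in ascending-cost order."""
    costs = sorted(costs)
    for n in range(len(costs), 1, -1):      # shrink the active set until
        X = (n - 1) / sum(costs[:n])        # every share is positive
        if 1 - costs[n - 1] * X > 0:
            break
    return [max(0.0, 1 - c * X) for c in costs]

# One miner with a 10% cost advantage over three identical rivals:
shares = equilibrium_shares([0.9, 1.0, 1.0, 1.0])
# In this model profit_i = share_i ** 2, so the cheap miner's profit
# advantage is much larger than its 10% cost advantage.
profit_ratio = (shares[0] / shares[1]) ** 2
```

With a 10 percent cost edge, the cheap miner ends up with roughly a third of all hash power and nearly 80 percent more profit than any rival — a small illustration of why "even small asymmetries in cost turn out to be really important."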
Weinberg also discussed attacks on the idealized functioning of blockchains, such as “selfish” mining, in which a miner does not announce a new block immediately but instead withholds it in the hope of mining several blocks in a row. The correct functioning of Bitcoin depends on miners not hoarding blocks, so that transactions can be processed efficiently. As Weinberg pointed out, Ittay Eyal and Emin Gün Sirer had shown that a majority of “honest” miners is not enough — a pool that commands between a quarter and one half of the resources in the system can obtain more than its fair share of revenue. As Eyal and Sirer wrote, if that happens, “rational miners will prefer to join the selfish miners, and the colluding group will increase in size until it becomes a majority. At this point, the Bitcoin system ceases to be a decentralized currency.”
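The Eyal–Sirer strategy is simple enough to simulate. Below is a toy Monte Carlo sketch of the withholding state machine (the parameter `gamma`, the fraction of honest power that builds on the pool's block during a tie, is part of their model; the simplifications here are our own). With a 40 percent pool it yields more than a 40 percent revenue share, while a 30 percent pool earns less than its fair share.

```python
import random

def selfish_share(alpha, gamma=0.0, rounds=300_000, seed=1):
    """Monte Carlo estimate of a selfish pool's long-run revenue share.
    alpha: the pool's fraction of total hash power.
    gamma: fraction of honest power that mines on the pool's block in a tie."""
    rng = random.Random(seed)
    lead = 0            # private blocks the pool holds beyond the public chain
    tie = False         # True while two length-1 branches race
    pool = honest = 0   # blocks credited to each side on the main chain
    for _ in range(rounds):
        if rng.random() < alpha:          # pool mines the next block
            if tie:                       # pool extends its branch, wins the race
                pool += 2
                tie = False
            else:
                lead += 1                 # keep the new block private
        else:                             # honest network mines the next block
            if tie:
                if rng.random() < gamma:  # honest block built on the pool's block
                    pool += 1
                    honest += 1
                else:                     # built on the honest branch
                    honest += 2
                tie = False
            elif lead == 0:
                honest += 1               # honest extends the public chain
            elif lead == 1:               # pool publishes; a 1-vs-1 race begins
                lead = 0
                tie = True
            elif lead == 2:               # pool publishes all, orphaning the honest block
                pool += 2
                lead = 0
            else:                         # lead > 2: pool reveals one block, stays ahead
                pool += 1
                lead -= 1
    return pool / (pool + honest)
```

Running `selfish_share(0.4)` gives roughly the 48 percent share the Eyal–Sirer analysis predicts for a 40 percent pool at `gamma = 0`.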
Furthermore, Bitcoin provides two forms of incentives for miners — block rewards and transaction fees. The way the system is designed, the importance of the block reward will diminish over time. The assumption had been that the two forms of reward were interchangeable. But as Weinberg pointed out in a 2016 paper with colleagues from Princeton, once fixed block rewards diminish, the variance of rewards from transaction fees becomes large. This means that selfish mining “can be made profitable for a miner with an arbitrarily low hash power share.” As Weinberg and Arnosti wrote, “while the Bitcoin protocol specifies how miners are supposed to behave, it cannot enforce this behavior.”
The obvious alternative is proof-of-stake protocols. However, in a 2018 paper, Weinberg, Jonah Brown-Cohen, Arvind Narayanan, and Christos-Alexandros Psomas pointed out a series of problems facing proof-of-stake protocols. The “vast majority of commercial and academic proposals” for proof-of-stake protocols suffer from one of two vulnerabilities that make them not incentive compatible. These vulnerabilities arise, they say, because proof-of-stake systems rely on sources of pseudorandomness that are endogenous to the currency.
In the Fall 2019 Simons Institute program on Online and Matching-Based Market Design, spectrum auctions were repeatedly brought up as the example par excellence of a mechanism design intervention that had achieved sought-after results and thus was worth emulating. But the success of this example is by no means uncontested.
As Philip Mirowski and Edward Nik-Khah recount in The Knowledge We Have Lost in Information, a critique of modern microeconomics, “It is demonstrably false that the spectrum auctions satisfied the congressional goals.”2 They continue:
Many businesses buying licenses defaulted on their down payments, leading to considerable “administrative delay” in re-awarding licenses. The lion’s share of licenses won by “small” and “entrepreneurial” businesses went to entities bankrolled by large telecoms… The auctions did not live up to their promise to promote “rapid deployment [in] rural areas.” … the allocation of licenses produced by the auctions proved to be unstable, as the industry has gone through a spate of mergers, acquisitions, and bankruptcies, ultimately leading to a high degree of license concentration.
Mirowski and Nik-Khah go on to say that, “this willingness to skew markets in favor of certain participants explains why, despite the failure to implement public policy, the FCC auctions are, as one participant noted, ‘a huge success for the auction theorists involved.’” They identified a paradox: markets are “preferred to bureaucracies on the grounds of transparency and efficiency; yet, only market designers could truly understand their setup and operation.”
A second example of clever market design yielding problematic results is the matching process in New York City and Boston schools. As George Packer wrote in a recent Atlantic article, the process has deep-seated flaws that lie outside the considerations accounted for in the algorithm:
Students in New York City public schools have to apply to middle school. They rank schools in their district, six or eight or a dozen of them, in order of preference, and the middle schools rank the students based on academic work and behavior. Then a Nobel Prize–winning algorithm matches each student with a school, and that’s almost always where the student has to go. The city’s middle schools are notoriously weak; in our district, just three had a reputation for being “good.” An education expert near us made a decent living by offering counseling sessions to panic-stricken families. The whole process seemed designed to raise the anxiety of 10-year-olds to the breaking point.
The anxiety of 10-year-olds was simply not modeled as part of the matching algorithm. And so it was not accounted for, even if it ought to be a central factor when designing a system intended to educate them. Similarly, the economic incentives of Bitcoin miners are central to the functioning of Bitcoin but were effectively unmodeled in the initial plans for the system. Weinberg’s critique is different from Mirowski’s and Nik-Khah’s, as well as from Packer’s, in that it is mathematical in nature. Weinberg arrives at his critique by enlarging the scope of phenomena that can be mathematically modeled. But this is not always possible — some meaningful things will forever remain beyond the scope of quantification.
The work of Weinberg and his colleagues should serve as a caution that even if the anxiety of 10-year-olds cannot be meaningfully incorporated into a quantitative model, it is nevertheless important. As the Internet has become ubiquitous in both space and time, and as algorithmically driven decisions have become of ever more central importance to our lives, computer scientists are playing a larger and larger role in structuring society. Robust interdisciplinary dialogue with humanists, social scientists, and philosophers will be essential if the algorithms that increasingly impact human society are to be of genuine benefit.
Thank you to the editors for giving us an opportunity to share another perspective on the above article. We enjoyed reading it and are happy to see these ideas represented in the newsletter! As the author points out, rapid development in novel algorithmic domains, such as cryptocurrencies, often arises without careful consideration for how humans will interact with these algorithms. We agree with the author that these issues are not just a theoretical exercise and should be seriously considered by cryptocurrency designers. But we do disagree with a few of the implications and wanted to share a different perspective.
Whether or not Bitcoin has “failed in its central aim” of decentralization is unclear, depending on what exactly one considers to be the central aim. For example: anyone can create an address and receive bitcoins without needing to talk to a central authority. Anyone can download and verify the entire transaction history, even if they don’t trust the entity they downloaded it from. And even if 80 percent of the mining power colluded maliciously, they are limited in what they can do; for example, they can’t confiscate a user’s coins. In these ways, Bitcoin is succeeding. On the other hand, it’s true that Bitcoin mining is not as decentralized as Nakamoto originally hoped.
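The verification point is worth making concrete: checking a chain's integrity requires only recomputing hashes, with no trusted party. A toy sketch (real Bitcoin blocks carry structured headers, Merkle roots of transactions, and difficulty checks, all omitted here; the tuple format is our own invention):

```python
import hashlib

def block_hash(prev_hash: str, data: str) -> str:
    """Hash a block's contents together with its predecessor's hash."""
    return hashlib.sha256(f"{prev_hash}|{data}".encode()).hexdigest()

def verify_chain(chain) -> bool:
    """chain: list of (prev_hash, data, this_hash) tuples.
    True iff every stored hash is correct and every block points at
    the hash of its predecessor -- so any tampering is detectable."""
    for i, (prev, data, h) in enumerate(chain):
        if block_hash(prev, data) != h:
            return False
        if i > 0 and prev != chain[i - 1][2]:
            return False
    return True

genesis = ("0" * 64, "genesis", block_hash("0" * 64, "genesis"))
b1 = (genesis[2], "alice pays bob", block_hash(genesis[2], "alice pays bob"))
```

Because each block commits to its predecessor's hash, altering any historical entry invalidates every later hash — which is why a downloader need not trust the peer that served the data.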
Currently, there is not much discrepancy between Bitcoin’s game theory and its practice. Specifically, independent works by Kiayias, Koutsoupias, Kyropoulou, and Tselekounis and by Sapirshtein, Sompolinsky, and Zohar prove that while a 34 percent miner can indeed profit by selfish mining under Bitcoin’s block reward, a ~31 percent miner cannot. And Bitcoin mining is currently roughly as concentrated as the author claims, which our cited theoretical analysis predicts. In both cases, what we see is what the theory predicts, even though those models are stylized and imperfect!
The primary purpose of our research is not to scold past designers but rather to guide future designers. For example, cryptocurrency designers face numerous decisions: proof of work vs. proof of stake, block reward vs. transaction fees, longest chain vs. Byzantine consensus, etc. Even when models are too stylized to produce accurate quantitative predictions, research still produces meaningful insights that help practitioners understand the trade-offs of different approaches.
There are countless examples of algorithms failing in practice because the designers did not consider at all how humans would interact. By now, there are also countless examples of algorithms significantly outperforming their ad hoc predecessors when accounting for human interaction — the easiest-to-cite example is perhaps the National Resident Matching Program (NRMP), which algorithmically matches medical school graduates to residencies using Gale and Shapley’s Nobel Prize–winning deferred acceptance algorithm. This algorithm is of course not flawless in all instantiations, and there are also more dire examples of algorithms failing in practice despite designers making a genuine effort to consider how humans will interact. But there are certainly success stories to go with the horrors.
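For reference, the deferred acceptance algorithm behind the NRMP (and the school matches discussed above) fits in a few lines. This is a simplified one-to-one version under complete preference lists; real deployments handle capacities, couples, and incomplete lists.

```python
def deferred_acceptance(proposer_prefs, receiver_prefs):
    """Gale-Shapley deferred acceptance, proposer-optimal variant.
    Each argument maps an agent to a complete, preference-ordered list
    of agents on the other side.  Returns {proposer: receiver}."""
    # Precompute each receiver's ranking of proposers for O(1) comparisons.
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    free = list(proposer_prefs)            # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}
    match = {}                             # receiver -> tentatively held proposer
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]   # p's best not-yet-tried option
        next_choice[p] += 1
        if r not in match:
            match[r] = p                   # r tentatively accepts
        elif rank[r][p] < rank[r][match[r]]:
            free.append(match[r])          # r trades up; old proposer re-enters
            match[r] = p
        else:
            free.append(p)                 # p rejected; will try its next choice
    return {p: r for r, p in match.items()}

# Hypothetical example: two students ranking two schools, and vice versa.
students = {"ann": ["X", "Y"], "bob": ["X", "Y"]}
schools = {"X": ["bob", "ann"], "Y": ["ann", "bob"]}
```

On this example the algorithm matches bob to X and ann to Y; no student-school pair prefers each other to their assigned match, which is the stability guarantee the Nobel citation refers to.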
Still, we agree with the author that our field can do better if we really want to impact practice. In particular, we agree that we must seriously value sustained interaction with domain experts and social scientists. Put another way, if it is indeed the case that the anxiety of 10-year-olds is markedly higher while participating in an algorithmic school match than its ad hoc predecessor, we should understand why and propose mitigating measures (but we should not throw out algorithmic modeling altogether).
Arvind Narayanan, Joseph Bonneau, Edward Felten, Andrew Miller, and Steven Goldfeder, Bitcoin and Cryptocurrency Technologies (Princeton: Princeton University Press, 2016).
Philip Mirowski and Edward M. Nik-Khah, The Knowledge We Have Lost in Information: The History of Information in Modern Economics (New York: Oxford University Press, 2017), 215.