Abstract
Platform business models are built on an uneven foundation. Online behavioral advertising (OBA) drives revenue for companies like Facebook, Google, and, increasingly, Amazon, and a notice-and-choice regime of privacy self-management governs the flows of personal data that help those platforms dominate advertising markets. OBA and privacy self-management work together to structure platform businesses. We argue that the legal and ideological legitimacy of this structure requires that profoundly contradictory conceptions of human subjects—their behaviors, cognition, and rational capacities—be codified and enacted in law and industrial art. A rational liberal consumer agrees to the terms of data extraction and exploitation set by platforms and their advertising partners, with deficiencies in individuals’ rational choices remedied by consumer protection law. Inside the platform, however, algorithmic scoring and decision systems act upon a “user,” who is presumed to exist not as a coherent subject with a stable and ordered set of preferences, but rather as a set of ever-shifting and mutable patterns, correlations, and propensities. The promise of data-driven behavioral advertising, and thus the supposed value of platform-captured personal data, is that users’ habits, actions, and indeed their “rationality” can be discerned, predicted, and managed. In the eyes of the law, which could protect consumers against exposure and exploitation, individuals are autonomous and rational (or at least boundedly rational); they freely agree to terms of service and privacy policies that establish their relationship to a digital service and any third parties lurking in its back end. In certain cases, law will even defend against exploitation of consumers’ cognitive heuristics through transparency mandates that ensure the legitimacy of their capacity to contract. But once that individual becomes a platform or service “user,” their legal status changes along with the estimation of their rational capacities. In the eyes of platforms and digital marketers who take advantage of policy allowances to deliver targeted advertising, consumers are predictably irrational, and their vulnerabilities can be identified or aggravated through data mining and design strategies. Behavioral marketing thus preserves a two-faced consumer: rational and empowered when submitting to tracking; vulnerable and predictable when that tracking leads toward the goal of influence or manipulation. This paper contributes to two currents in discussions about surveillance advertising and platform capitalism: it adds to the growing consensus that privacy self-management provides inadequate mechanisms for regulating corporate uses of personal data; and it strengthens the case that behavioral tracking and targeting should be reined in or banned outright. The paper makes these points by examining this contradiction in how the governance and practice of behavioral marketing construct the consumer and what we call the platform user.