Playlist: 21 videos

AI and Humanity

0:27:40
Tom Gilbert (Cornell University)
https://simons.berkeley.edu/talks/tbd-434

As AI systems are integrated into high-stakes social domains, researchers now examine how to design and operate them in a safe and ethical manner. However, the criteria for identifying and diagnosing safety risks in complex social contexts remain unclear and contested. In this talk, I examine the vagueness in debates about the safety and ethical behavior of AI systems. I show how this vagueness cannot be resolved through mathematical formalism alone, instead requiring deliberation about the politics of development as well as the context of deployment. Drawing from a new sociotechnical lexicon, I redefine vagueness in terms of distinct design challenges at key stages in AI system development. The resulting framework of Hard Choices in Artificial Intelligence (HCAI) shows what is at stake when navigating these dilemmas by 1) specifying distinct forms of sociotechnical judgment that correspond to each stage; and 2) suggesting mechanisms for feedback that ensure safety issues are exhaustively addressed. As such, HCAI contributes to a timely debate about the status of AI development in democratic societies, arguing that deliberation should be the goal of AI Safety, not just the procedure by which it is ensured.

0:16:25
Qian Yang (Cornell University)
https://simons.berkeley.edu/talks/tbd-436

Some claim AI is the "new electricity" due to its growing ubiquity and significance. My work examines this vision from a human-centered design perspective: How can we situate AI's algorithmic advances in ways people perceive as valuable? How can we design interactions that improve user-perceived AI fairness and agency? In this talk, I share one past project as a case study (namely, designing decision-support systems for artificial heart implant candidate selection) and discuss lessons learned when AI systems hit the road.

0:52:45
Bernard Keenan (Birkbeck College)
https://simons.berkeley.edu/talks/authorship-technicity-and-contingency

In this paper I discuss examples of contemporary legal semantics associated with the ‘human in the loop’ using aspects of the work of Hans Blumenberg and Niklas Luhmann. Their work is relevant for at least three reasons.

First, both thinkers utilised zettelkasten (slip-boxes), personally coded collections of notes and quotations written on index cards. Though inanimate, a zettelkasten has the capacity to surprise its creator with unexpected information and therefore, on Luhmann’s account, to partake in communication. This is authorship as the product of human observers capable of producing new information through observation. I suggest we can use this to reflexively observe the old cybernetic problem of the integration of psychic systems and information systems in decision-making. That distinction helps clarify the role of the observer of the legal subject of the ‘human’ as differentiated from AI.

Second, both closely studied the semantic shifts associated with humanity during the Enlightenment, arguing that rather than straightforwardly replacing God with homo faber, modernity radically decentred the figure of the human in relation to both nature and technology. Humanity lacks ontological completeness; insofar as there is a ‘human nature’, it is one permanently compelled to reflexively adapt itself to a world/environment which it observes and from which it is constitutively differentiated. Technicization, on the other hand, is for both phenomenological: a historical mode of perception primarily understood as the coupling together of causal elements and symbols (regardless of the materiality or sophistication of those elements). The complexities of technological differentiation provoke responses not only semantically but also organisationally and procedurally. This suggests that the figure of the human in relation to AI should be observed not just semantically but, crucially, for its organisational roles, internal and external to each organisation concerned.

Third, both insist upon the contingency and uncontrollability of the world, and start with the axiom that their object of study – whether humanity or society – is improbable and contingent. Society is both autonomous and fragile. What is could be otherwise, or could not be at all. One consequence is that both throw into question simple causal connections between theory and practice. The link between theory and practice is historically recent, emerging in nineteenth-century discourse linking science and technology. Practice can be theorised, but whether or not theory can be practiced is a question that theory cannot easily answer. If this is disquieting, it may also be liberating, for practitioners and theorists alike.

0:15:55
Lee McGuigan (University of North Carolina at Chapel Hill)
https://simons.berkeley.edu/talks/lawaeutms-consumers-and-platform-users-how-competing-constructions-humans-legitimize-online

Platform business models are built on an uneven foundation. Online behavioral advertising (OBA) drives revenue for companies like Facebook, Google, and, increasingly, Amazon, and a notice-and-choice regime of privacy self-management governs the flows of personal data that help those platforms dominate advertising markets. OBA and privacy self-management work together to structure platform businesses. We argue that the legal and ideological legitimacy of this structure requires that profoundly contradictory conceptions of human subjects—their behaviors, cognition, and rational capacities—be codified and enacted in law and industrial art.

A rational liberal consumer agrees to the terms of data extraction and exploitation set by platforms and their advertising partners, with deficiencies in individuals’ rational choices remedied by consumer protection law. Inside the platform, however, algorithmic scoring and decision systems act upon a “user,” who is presumed to exist not as a coherent subject with a stable and ordered set of preferences, but rather as a set of ever-shifting and mutable patterns, correlations, and propensities. The promise of data-driven behavioral advertising, and thus the supposed value of platform-captured personal data, is that users’ habits, actions, and indeed their “rationality” can be discerned, predicted, and managed.

In the eyes of the law, which could protect consumers against exposure and exploitation, individuals are autonomous and rational (or at least boundedly rational); they freely agree to terms of service and privacy policies that establish their relationship to a digital service and any third parties lurking in its back end. In certain cases, law will even defend against exploitation of consumers’ cognitive heuristics through transparency mandates to ensure the legitimacy of their ability to contract. But once that individual becomes a platform or service “user”, their legal status changes along with the estimation of their rational capacities. In the eyes of platforms and digital marketers who take advantage of policy allowances to deliver targeted advertising, consumers are predictably irrational, and their vulnerabilities can be identified or aggravated through data mining and design strategies. Behavioral marketing thus preserves a two-faced consumer: rational and empowered when submitting to tracking; vulnerable and predictable when that tracking leads toward the goal of influence or manipulation.

This paper contributes to two currents in discussions about surveillance advertising and platform capitalism: it adds to the growing consensus that privacy self-management provides inadequate mechanisms for regulating corporate uses of personal data; and it strengthens the case that behavioral tracking and targeting should be reined in or banned outright. The paper makes these points by examining this contradiction in how the governance and practice of behavioral marketing construct consumers and what we call the platform user.

0:46:05
S.M. Amadae (University of Helsinki)
https://simons.berkeley.edu/talks/tbd-445

Western political theory celebrates individual autonomy and treats it as the basis for collective self-governance in democratic institutions. Yet the rational choice revolution, with its privileging of consumer sovereignty and its realization in market solutions, has offered numerous critiques of democratic will-formation and few remedies. Concurrently with these developments in the decades after World War II, nuclear command and control (NC2) systems have de facto become the most extreme examples of the exercise of national sovereignty, with life-and-death decision-making power over billions of humans.

Nuclear command and control provides an example of the contemporary hybrid form of human intelligence mediated by, and integrated with, complex information and communication systems. In the case of NC2, spanning carbon- and silicon-based actors, Integrated Information Theory offers a telling means of conceptually testing the robustness of the system: how vulnerable is the command and control system to the introduction of entropy into some partition of its physical substrate?

This talk raises the following questions. How does NC2 epitomize the exercise of national sovereignty? What does it mean to think of intelligence as existing in hybrid systems of human and computational components? Does the current role of NC2 invite imagining emergent properties in these hybrid “natural” and “artificial” intelligence systems? Finally, does the trajectory of this material practice, with arguably the greatest cause-effect repertoire currently in existence, increasingly challenge efforts to implement collective forms of agency embodying ideals of individual autonomy and collective ethical accountability?
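
The partition question above invites a toy calculation. The sketch below is a minimal illustration, not IIT proper and not any model from the talk: it treats a hypothetical two-component command-and-control system as a joint probability distribution and measures, in bits, how much information a cut into independent parts would destroy. All components and numbers are invented assumptions.

```python
# Toy illustration (not IIT proper): how much information does a cut
# of a small "command and control" system destroy? We model two
# coupled binary components and compare their joint distribution with
# the product of its marginals; the gap (mutual information, in bits)
# is a crude proxy for how integrated the system is across that cut.
import math

# Hypothetical joint distribution over (sensor, controller) states.
joint = {
    (0, 0): 0.40, (0, 1): 0.10,
    (1, 0): 0.10, (1, 1): 0.40,
}

def marginal(dist, axis):
    """Marginal distribution of one component."""
    m = {}
    for state, p in dist.items():
        m[state[axis]] = m.get(state[axis], 0.0) + p
    return m

def integration(dist):
    """KL(joint || product of marginals): zero iff the cut is lossless."""
    m0, m1 = marginal(dist, 0), marginal(dist, 1)
    return sum(p * math.log2(p / (m0[a] * m1[b]))
               for (a, b), p in dist.items() if p > 0)

print(f"information destroyed by the cut: {integration(joint):.3f} bits")
```

A score near zero would mean the partition is harmless; the larger the score, the more the system's behaviour depends on integration across that cut.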

0:15:30
J.D. Zamfirescu (UC Berkeley)
https://simons.berkeley.edu/talks/large-language-models-speculating-second-order-effects

What will ubiquitous deployment of, and accessible interfaces to, large language models (GPT-3, LaMDA, T5, etc.) enable future humans to do, and how will these capabilities impact culture and society? In this talk we describe a framework we used to explore the first- and second-order effects of a few specific capabilities, including (1) fluent, directed text and speech generation and interpretation; (2) conversions between text and structured data; (3) automation and rapid response; and (4) simpler bespoke implementations of natural language interface (NLI)-based applications. As a speculative design exercise, we then play out some of these second-order effects on social institutions, internet and social media discourse, broadcast media, and research.
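
To make capability (2) concrete: text-to-structured-data conversion typically amounts to prompting a model for a fixed schema and parsing the reply. The sketch below is purely illustrative; `call_llm` is a hypothetical stub that returns a canned reply so the sketch runs, not the API of GPT-3, LaMDA, T5, or any other real service.

```python
# Sketch of capability (2): converting free text to structured data.
# `call_llm` is a hypothetical stand-in for whatever hosted model is
# actually used; here it returns a canned reply so the sketch runs.
import json

PROMPT = """Extract the fields below from the message; reply with JSON only.
Fields: name (string), date (YYYY-MM-DD), request (string).
Message: {message}"""

def call_llm(prompt: str) -> str:
    # Placeholder: a real deployment would send `prompt` to a model here.
    return '{"name": "Dana", "date": "2023-03-01", "request": "reschedule"}'

def extract(message: str) -> dict:
    reply = call_llm(PROMPT.format(message=message))
    return json.loads(reply)  # malformed model output raises, by design

print(extract("Hi, it's Dana - can we reschedule to March 1st, 2023?"))
```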

0:43:10
Christopher O'Neill (Monash University)
https://simons.berkeley.edu/talks/ironies-anachronism-afterwardsness-and-necessity-human-loop

The figure of the Human-in-the-Loop has been much criticised in recent scholarship as an inadequate safeguard against both the potential and demonstrated harms of AI systems. This paper will take an historical approach to the figure, tracing its emergence in the mid-century NASA space program, and surveying the various transformations it has undergone over the intervening decades, including at key inflection points such as the 1979 Three Mile Island incident. Drawing upon theories of 'afterwardsness' in the work of Jean Laplanche and Georges Didi-Huberman, this paper will argue that the Human-in-the-Loop is born as an anachronism, as a figure which is at once necessary and which yet appears too late to intervene 'adequately' in the complex industrial and post-industrial milieux in which it finds itself. What is at stake in tracing this genealogy of an anachronism is not to argue for the futility of human intervention in complex technical systems or to lament the passing of a more coherent technical agency. Instead, I suggest that it is this very 'out-of-timeliness' which is suggestive of the figure's potential to produce a creative reimagining of the terms of human autonomy in complex sociotechnical environments.

0:15:57
Alison Gopnik (UC Berkeley)
https://simons.berkeley.edu/talks/large-language-models-cultural-technology

Recent work on large language models has focused on whether or not they are analogous to individual intelligent agents. I argue that instead we should think of them as cultural transmission technologies, by which accumulated information from other humans is passed on in a compact form. This makes them analogous to other human technologies such as writing, print, libraries, internet search, and arguably language itself, rather than to intelligent systems. I will present some data showing that such systems are very bad at solving simple reasoning tasks, but very good at passing on information, and discuss implications of this view for social and technological progress, for good or ill.

0:47:26
Melanie Moses (University of New Mexico)
https://simons.berkeley.edu/talks/unintended-consequences-repurposed-ai

This talk will examine how algorithms and AI are often repurposed for applications never conceived of by the original designers. Humans repurpose algorithms in part because of the illusion of objectivity and universality of mathematical formulations. Algorithms then become embedded in complex systems, often without humans or institutions understanding the assumptions, intentions, or meaning of their predictions. The talk will examine examples of this phenomenon in criminal justice and medical equity, with suggestions for how to mitigate unintended consequences. I will also describe a forthcoming interdisciplinary course in algorithmic justice designed to help law students, social scientists, and computer scientists understand algorithmic bias.
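
One lightweight way to surface the kind of unintended consequence described above is a group-wise error audit of a repurposed risk score. The sketch below is a generic disparity check with invented data, not an analysis from the talk: it compares false positive rates across two groups, the kind of gap made prominent by the criminal-justice examples.

```python
# Minimal audit sketch: does a reused risk score make false-positive
# errors at different rates for different groups? All data invented.
records = [
    # (group, predicted_high_risk, actually_reoffended)
    ("A", True, False), ("A", False, False), ("A", True, True),
    ("A", False, False), ("B", True, False), ("B", True, False),
    ("B", True, True), ("B", False, False),
]

def false_positive_rate(group):
    """Share of true negatives in `group` flagged high-risk anyway."""
    negatives = [r for r in records if r[0] == group and not r[2]]
    return sum(r[1] for r in negatives) / len(negatives)

for g in ("A", "B"):
    print(f"group {g}: FPR = {false_positive_rate(g):.2f}")
# A large gap between groups is one signal that a score built for one
# purpose is misbehaving in the context into which it was repurposed.
```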

0:17:16
Sarah Cen (Massachusetts Institute of Technology)
https://simons.berkeley.edu/talks/design-and-governance-human-facing-algorithms

Data-driven algorithms have increasingly wide and deep reach into our lives, but the methods used to design and govern these algorithms are outdated. For example, most data-driven algorithms assume that humans report information about themselves truthfully, but this assumption rarely holds in human-facing applications, with significant implications for the algorithms and their performance, since the information humans provide is used as training data. Similarly, current methods for auditing data-driven algorithms are often too brittle to apply as state-of-the-art algorithms evolve. In this talk, we'll explore data-driven algorithms along two axes: (1) designing vs. governing data-driven algorithms, and (2) whether they are used for repeated vs. one-off decisions. We'll examine the design of algorithms for repeated decisions using recommender systems (e.g., Yelp, Facebook, Google) as a case study, focusing on the role of *trust* between users and recommenders. We'll examine the governance of algorithms for repeated decisions by looking at how to *audit* social media. Finally, we'll examine both the design and governance of one-off decisions by proposing a new legal right---the right to be an exception in data-driven decision-making---that tackles the problem of using averages (in almost every part of the algorithmic pipeline) to make high-risk decisions on *individuals*.
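
As a minimal illustration of the truthful-reporting point above (not the speaker's model): train a naive threshold rule on truthful self-reports, then evaluate it when respondents inflate what they report. The data-generating process and the inflation model below are both invented assumptions.

```python
# Sketch: what happens to a naive rule trained on self-reports when
# people misreport strategically? All distributions are invented.
import random

random.seed(0)

def sample(n, inflate=0.0):
    """True score drives the outcome; the reported score is what we see."""
    data = []
    for _ in range(n):
        true = random.gauss(0, 1)
        reported = true + inflate * random.random()  # strategic inflation
        outcome = true > 0
        data.append((reported, outcome))
    return data

def accuracy(data, t):
    """Fraction of cases the rule `reported > t` classifies correctly."""
    return sum((x > t) == y for x, y in data) / len(data)

def fit_threshold(data):
    # Pick the reported-score cutoff that maximizes training accuracy.
    candidates = sorted(x for x, _ in data)
    return max(candidates, key=lambda t: accuracy(data, t))

rule = fit_threshold(sample(500))  # trained assuming truthful reports
print("accuracy on truthful reports:", accuracy(sample(500, 0.0), rule))
print("accuracy on inflated reports:", accuracy(sample(500, 1.5), rule))
```

The rule is near-perfect on truthful reports and degrades once reports are inflated, since the learned cutoff no longer tracks the true score.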