Wednesday, July 13th, 2022

9:30 am–9:55 am
Speaker: Tom Gilbert (Cornell University)

As AI systems are integrated into high stakes social domains, researchers now examine how to design and operate them in a safe and ethical manner. However, the criteria for identifying and diagnosing safety risks in complex social contexts remain unclear and contested. In this talk, I examine the vagueness in debates about the safety and ethical behavior of AI systems. I show how this vagueness cannot be resolved through mathematical formalism alone, instead requiring deliberation about the politics of development as well as the context of deployment. Drawing from a new sociotechnical lexicon, I redefine vagueness in terms of distinct design challenges at key stages in AI system development. The resulting framework of Hard Choices in Artificial Intelligence (HCAI) shows what is at stake when navigating these dilemmas by (1) specifying distinct forms of sociotechnical judgment that correspond to each stage; and (2) suggesting mechanisms for feedback that ensure safety issues are exhaustively addressed. As such, HCAI contributes to a timely debate about the status of AI development in democratic societies, arguing that deliberation should be the goal of AI Safety, not just the procedure by which it is ensured.

9:55 am–10:20 am
Speaker: Qian Yang (Cornell University)

Some claim AI is the "new electricity" due to its growing ubiquity and significance. My work examines this vision from a human-centered design perspective: How can we situate AI's algorithmic advances in ways people perceive as valuable? How can we design interactions to improve user-perceived AI fairness and agency? In this talk, I share one past project as a case study (namely, designing decision support systems for artificial heart implant candidate selection) and discuss lessons learned when AI systems hit the road.

10:20 am–10:45 am
Speaker: Bernard Keenan (Birkbeck College)

In this paper I discuss examples of contemporary legal semantics associated with the ‘human in the loop’ using aspects of the work of Hans Blumenberg and Niklas Luhmann. Their work is relevant for at least three reasons. First, both thinkers utilised zettelkasten (slip-boxes), personally coded collections of notes and quotations written on index cards. Though inanimate, a zettelkasten has the capacity to surprise its creator with unexpected information and therefore, on Luhmann’s account, to partake in communication. This is authorship as the product of human observers capable of producing new information through observation. I suggest we can use this to reflexively observe the old cybernetic problem of the integration of psychic systems and information systems in decision-making. That distinction helps clarify the role of the observer of the legal subject of the ‘human’ as differentiated from AI. Second, both closely studied the semantic shifts associated with humanity during the Enlightenment, arguing that rather than straightforwardly replacing God with homo faber, modernity radically decentred the figure of the human in relation to both nature and technology. Humanity lacks ontological completeness; insofar as there is a ‘human nature’, it is one permanently compelled to reflexively adapt itself to a world/environment which it observes and from which it is constitutively differentiated. Technicization, on the other hand, is for both phenomenological: a historical mode of perception primarily understood as the coupling together of causal elements and symbols (regardless of the materiality or sophistication of those elements). The complexities of technological differentiation provoke responses not only semantically but also organisationally and procedurally. This suggests that the figure of the human in relation to AI should be observed not just semantically but, crucially, for its organisational roles, internal and external to each organisation concerned.
Third, both insist upon the contingency and uncontrollability of the world, and start with the axiom that their object of study – whether humanity or society – is improbable and contingent. Society is both autonomous and fragile. What is could be otherwise, or could not be at all. One consequence is that both throw into question simple causal connections between theory and practice. The link between theory and practice is historically recent, emerging in nineteenth-century discourse linking science and technology. Practice can be theorised, but whether or not theory can be practised is a question that theory cannot easily answer. If this is disquieting, it may also be liberating, for practitioners and theorists alike.

11:15 am–11:45 am
Speaker: Lee McGuigan (University of North Carolina at Chapel Hill)

Platform business models are built on an uneven foundation. Online behavioral advertising (OBA) drives revenue for companies like Facebook, Google, and, increasingly, Amazon, and a notice-and-choice regime of privacy self-management governs the flows of personal data that help those platforms dominate advertising markets. OBA and privacy self-management work together to structure platform businesses. We argue that the legal and ideological legitimacy of this structure requires that profoundly contradictory conceptions of human subjects—their behaviors, cognition, and rational capacities—be codified and enacted in law and industrial art. A rational liberal consumer agrees to the terms of data extraction and exploitation set by platforms and their advertising partners, with deficiencies in individuals’ rational choices remedied by consumer protection law. Inside the platform, however, algorithmic scoring and decision systems act upon a “user,” who is presumed to exist not as a coherent subject with a stable and ordered set of preferences, but rather as a set of ever-shifting and mutable patterns, correlations, and propensities. The promise of data-driven behavioral advertising, and thus the supposed value of platform-captured personal data, is that users’ habits, actions, and indeed their “rationality” can be discerned, predicted, and managed. In the eyes of the law, which could protect consumers against exposure and exploitation, individuals are autonomous and rational (or at least boundedly rational); they freely agree to terms of service and privacy policies that establish their relationship to a digital service and any third parties lurking in its back end. In certain cases, law will even defend against exploitation of consumers’ cognitive heuristics through transparency mandates to ensure the legitimacy of their ability to contract. 
But once that individual becomes a platform or service “user”, their legal status changes along with the estimation of their rational capacities. In the eyes of platforms and digital marketers who take advantage of policy allowances to deliver targeted advertising, consumers are predictably irrational, and their vulnerabilities can be identified or aggravated through data mining and design strategies. Behavioral marketing thus preserves a two-faced consumer: rational and empowered when submitting to tracking; vulnerable and predictable when that tracking leads toward the goal of influence or manipulation. This paper contributes to two currents in discussions about surveillance advertising and platform capitalism: it adds to the growing consensus that privacy self-management provides inadequate mechanisms for regulating corporate uses of personal data; and it strengthens the case that behavioral tracking and targeting should be reined in or banned outright. The paper makes these points by examining this contradiction in how the governance and practice of behavioral marketing construct consumers and what we call the platform user.

11:45 am–12:15 pm
Speaker: S.M. Amadae (University of Helsinki)

Western political theory celebrates individual autonomy and treats it as the basis for collective self-governance in democratic institutions. Yet the rational choice revolution and the privileging of consumer sovereignty, realized in market solutions, have offered numerous critiques of democratic will-formation and few remedies. Concurrently with these developments in the decades after World War II, nuclear command and control (NC2) systems have de facto become the most extreme examples of exercising national sovereignty, with life-and-death decision-making power over billions of humans.

Nuclear command and control provides an example of the contemporary hybrid form of human intelligence mediated by, and integrated with, complex information and communication systems. In the case of NC2, spanning carbon- and silicon-based actors, Integrated Information Theory is a telling means to conceptually test the robustness of the system: how vulnerable is the command and control system to the introduction of entropy into some partition of its physical substrate?

This talk raises the following questions. How does NC2 epitomize the exercise of national sovereignty? What does it mean to think of intelligence as existing in hybrid systems of human and computational components? Does the current role of NC2 invite imagining emergent properties in these hybrid “natural” and “artificial” intelligence systems? Finally, does the trajectory of this material practice, with arguably the greatest cause-effect repertoire currently in existence, increasingly challenge efforts to implement collective forms of agency embodying ideals of individual autonomy and collective ethical accountability?

2:00 pm–2:30 pm
Speaker: J.D. Zamfirescu (UC Berkeley)

What will ubiquitous deployment and accessible interfaces to large language models (GPT-3, LaMDA, T5, etc.) enable future humans to do, and how will these capabilities impact culture and society? In this talk we describe a framework we used to explore the first- and second-order effects of a few specific capabilities, including (1) fluent, directed text and speech generation and interpretation; (2) conversions between text and structured data; (3) automation and rapid response; and (4) simpler bespoke implementations of natural language interface (NLI)-based applications. As a speculative design exercise, we then play out some of these second-order effects on social institutions, internet and social media discourse, broadcast media, and research.

2:30 pm–3:00 pm
Speaker: Christopher O'Neill (Monash University)

The figure of the Human-in-the-Loop has been much criticised in recent scholarship as an inadequate safeguard against both the potential and demonstrated harms of AI systems. This paper will take an historical approach to the figure, tracing its emergence in the mid-century NASA space program, and surveying the various transformations it has taken over the intervening decades, including at key inflection points such as the 1979 Three Mile Island incident. Drawing upon theories of 'afterwardsness' in the work of Jean Laplanche and Georges Didi-Huberman, this paper will argue that the Human-in-the-Loop is born as an anachronism, as a figure which is at once necessary and which yet appears too late to intervene 'adequately' in the complex industrial and post-industrial milieux in which it finds itself. What is at stake in tracing this genealogy of an anachronism is not to argue for the futility of human intervention in complex technical systems or to lament the passing of a more coherent technical agency. Instead, I suggest that it is this very 'out-of-timeliness' which is suggestive of the figure's potential to produce a creative reimagining of the terms of human autonomy in complex sociotechnical environments.

3:30 pm–4:00 pm
Speaker: Alison Gopnik (UC Berkeley)

Recent work on large language models has focused on whether or not they are analogous to individual intelligent agents. I argue that instead we should think of them as cultural transmission technologies, by which accumulated information from other humans is passed on in a compact form. This makes them analogous to other human technologies such as writing, print, libraries, internet search, and arguably language itself, rather than to intelligent systems. I will present some data showing that such systems are very bad at solving simple reasoning tasks, but very good at passing on information, and discuss implications of this view for social and technological progress, for good or ill.

4:00 pm–4:30 pm
Speaker: Melanie Moses (University of New Mexico)

This talk will examine how algorithms and AI are often repurposed for applications never conceived of by the original designers. Humans repurpose algorithms in part because of the illusion of objectivity and universality of mathematical formulations. Algorithms then become embedded in complex systems, often without humans or institutions understanding the assumptions, intentions or meaning of their predictions. The talk will examine examples of this phenomenon in criminal justice and medical equity with suggestions for how to mitigate unintended consequences. I will also describe a forthcoming interdisciplinary course in algorithmic justice for law students, social scientists, and computer scientists to understand algorithmic bias.

Thursday, July 14th, 2022

9:30 am–9:55 am
Speaker: Sarah Cen (Massachusetts Institute of Technology)

Data-driven algorithms have increasingly wide and deep reach into our lives, but the methods used to design and govern these algorithms are outdated. In this talk, we discuss two works, one of which focuses on algorithm design and the other on algorithm governance. The first work asks whether common assumptions used to design recommendation algorithms---such as those behind Yelp, Netflix, Facebook, and Grammarly---hold up in practice. In particular, recommendation platforms generally assume that users have fixed preferences and report their preferences truthfully. In reality, however, users can adapt and strategize, and failing to acknowledge the agency of users can hurt both the user and platform. In our work, we provide a game-theoretic perspective of recommendation and study the role of *trust* between a user and their platform. The second work studies exceptions in data-driven decision-making. Exceptions to a rule are decision subjects for whom the rule is unfit. Because averages are so fundamental to machine learning, data-driven exceptions are inevitable. The problem is: data-driven exceptions arise non-intuitively, making it difficult to identify and protect individuals who, through no fault of their own, fall through the cracks. Our work lays out a framework for legally protecting individuals subject to high-risk, data-driven decisions. 
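The gap between assumed and actual user behavior can be made concrete with a toy simulation (my own illustration under assumed dynamics, not the speaker's model): a platform that treats reported ratings as truthful will misestimate the preferences of a user who strategically withholds feedback to steer future recommendations.

```python
import random

random.seed(0)

# Toy illustration (not the paper's model): a platform estimates a user's
# preference for two content genres from reported ratings. A "truthful" user
# reports every experience; a "strategic" user suppresses positive ratings
# for a genre they enjoy but do not want recommended more often.
TRUE_PREF = {"news": 0.9, "gossip": 0.6}  # probability the user enjoys an item

def simulate(strategic, n=1000):
    reports = {"news": [], "gossip": []}
    for _ in range(n):
        genre = random.choice(["news", "gossip"])
        enjoyed = random.random() < TRUE_PREF[genre]
        if strategic and genre == "gossip" and enjoyed:
            continue  # the strategic user hides enjoyment of gossip
        reports[genre].append(1 if enjoyed else 0)
    # The platform's naive estimate: mean reported rating per genre.
    return {g: sum(r) / len(r) for g, r in reports.items()}

truthful = simulate(strategic=False)
strategic = simulate(strategic=True)
```

Under the fixed-preference, truthful-reporting assumption, the platform's estimate for "gossip" collapses for the strategic user even though the user's true enjoyment is unchanged, which is the kind of misalignment a game-theoretic treatment of user agency has to address.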

9:55 am–10:20 am
Speaker: Peter Hershock (East-West Center)

The informational synthesis of purpose-generating, carbon-based human intelligence with purpose-implementing, silicon-based computational intelligence is widely heralded as driving a 4th Industrial Revolution, but this revolution is as ontological as it is industrial, and as ethical as it is technical. This talk makes a case for seeing: (1) that AI ethics is being hampered by default presuppositions about ontologically individual moral agents, actions, and patients; (2) that taking the human risks and rewards of intelligent technology fully into account requires a critical distinction between tools and technologies; and (3) that the greatest threat intelligent technology poses to humanity is not a technological singularity, but an ethical singularity: a collapse of the opportunity space for practicing the evaluative art of human course correction as an ironic consequence of choice-mediated attention exploitation and consciousness hacking.

10:20 am–10:45 am
Speaker: Smitha Milli (UC Berkeley)

Most recommendation engines today are based on predicting user engagement, e.g. predicting whether a user will click on an item or not. However, there is potentially a large gap between engagement signals and a desired notion of "value" that is worth optimizing for. We use the framework of measurement theory to (a) confront the designer with a normative question about what the designer values, (b) provide a general latent variable model approach that can be used to operationalize the target construct and directly optimize for it, and (c) guide the designer in evaluating and revising their operationalization. We implement our approach on the Twitter platform with millions of users. In line with established approaches to assessing the validity of measurements, we perform a qualitative evaluation of how well our model captures a desired notion of "value".
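The latent variable idea can be sketched in a few lines (a hypothetical setup of my own, not Twitter's implementation or the paper's actual model): observed engagement signals are treated as noisy, differently weighted reflections of an unobserved "value", and items are scored by combining signals according to their assumed loadings rather than by clicks alone.

```python
import random

random.seed(1)

# Hypothetical loadings of each engagement signal on latent value v.
# Clicks are assumed to track value only weakly; reports load negatively.
LOADINGS = {"click": 0.3, "like": 0.8, "report": -0.9}

def observe(v):
    """Generate engagement signals for an item of latent value v in [0, 1]."""
    return {
        "click": int(random.random() < 0.5 + 0.2 * v),   # clickbait still gets clicks
        "like": int(random.random() < 0.1 + 0.8 * v),
        "report": int(random.random() < 0.3 * (1 - v)),
    }

def value_score(signals):
    """Proxy for E[v | signals]: loading-weighted sum of observed signals."""
    return sum(LOADINGS[s] * x for s, x in signals.items())

# A high-value item separates cleanly from a low-value one under this score,
# even though their click rates are fairly close.
high = [value_score(observe(0.9)) for _ in range(500)]
low = [value_score(observe(0.1)) for _ in range(500)]
```

The point of the sketch is only the structure: the normative choice lives in the loadings, which is exactly where the designer's notion of "value" gets operationalized.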

11:15 am–11:45 am
Speaker: Kacper Sokol (RMIT University)

A myriad of approaches exists to help us peer inside automated decision-making systems based on artificial intelligence and machine learning algorithms. These tools and their insights, however, are socio-technological constructs themselves, hence subject to human biases and preferences as well as technical limitations. Under these conditions, how can we ensure that explanations are meaningful and fulfil their role by leading to understanding? In this talk I will demonstrate how different configurations of an explainability algorithm may impact the resulting insights and show the importance of the strategy employed to present them to the user, arguing in favour of a clear separation between the technical and social aspects of such tools.

11:45 am–12:15 pm
Speaker: Megha Srivastava (Stanford University)

Recent works on shared autonomy and assistive-AI technologies, such as assistive robotic teleoperation, seek to model and help human users with limited ability in a fixed task. However, these approaches often fail to account for humans' ability to adapt and eventually learn how to execute a control task themselves. Furthermore, in applications where it may be desirable for a human to intervene, these methods may have inhibited their ability to learn how to succeed with full self-control. We focus on the problem of assistive teaching of motor control tasks such as parking a car or landing an aircraft. Despite their ubiquitous role in humans' daily activities and occupations, motor tasks are rarely taught in a uniform way due to their high complexity and variance. We propose an AI-assisted teaching algorithm that leverages skill discovery methods from the reinforcement learning (RL) literature to (i) break down any motor control task into teachable skills, (ii) construct novel drill sequences, and (iii) individualize curricula for students with different capabilities. We show that AI-assisted teaching with skills improves student performance by around 40% compared to practicing full trajectories without assistance, and that practicing with individualized drills can result in up to 25% further improvement.
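One way to picture individualized drill selection (a minimal sketch under my own assumed mechanics, not the paper's RL-based skill discovery) is: given per-skill proficiency estimates, always drill the skill the student is currently worst at, updating proficiency after each drill.

```python
# Toy curriculum individualization: skill names and proficiency dynamics
# below are hypothetical, for illustration only.

def next_drill(proficiency):
    """Pick the skill with the lowest estimated proficiency."""
    return min(proficiency, key=proficiency.get)

def practice(proficiency, skill, gain=0.2):
    """Practicing a skill closes part of the remaining gap to mastery (1.0)."""
    proficiency[skill] = min(1.0, proficiency[skill] + gain * (1 - proficiency[skill]))

# Example: a parking task decomposed into hypothetical component skills.
student = {"steering": 0.7, "throttle": 0.4, "reversing": 0.2}
curriculum = []
for _ in range(5):
    skill = next_drill(student)
    curriculum.append(skill)
    practice(student, skill)
```

A student who already steers well never wastes drills on steering; the curriculum concentrates on reversing and throttle control, which is the individualization the abstract describes, in miniature.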

2:00 pm–2:30 pm
Speaker: Ben Green (University of Michigan)

As algorithms become an influential component of government decision-making around the world, policymakers have debated how governments can attain the benefits of algorithms while preventing their harms. One mechanism that has become a centerpiece of global efforts to regulate government algorithms is to require human oversight of algorithmic decisions. Despite the widespread turn to human oversight, these policies rest on an uninterrogated assumption: that people are able to effectively oversee algorithmic decision-making. In this article, I survey 41 policies that prescribe human oversight of government algorithms and find that they suffer from two significant flaws. First, evidence suggests that people are unable to perform the desired oversight functions. Second, as a result of the first flaw, human oversight policies legitimize government uses of faulty and controversial algorithms without addressing the fundamental issues with these tools. Thus, rather than protect against the potential harms of algorithmic decision-making in government, human oversight policies provide a false sense of security in adopting algorithms and enable vendors and agencies to shirk accountability for algorithmic harms. In light of these flaws, I propose a shift from human oversight to institutional oversight as the central mechanism for regulating government algorithms. This institutional approach operates in two stages. First, agencies must justify that it is appropriate to incorporate an algorithm into decision-making and that any proposed forms of human oversight are supported by empirical evidence. Second, these justifications must receive democratic public review and approval before the agency can adopt the algorithm.

2:30 pm–3:00 pm
Speaker: Connal Parsley (University of Kent)

In this talk, I introduce the project ‘The Future of Good Decisions: An evolutionary approach to human-AI government administrative decision-making’, recently funded by a UK fellowship scheme to run from 2022-2029. This project addresses the impasse between automated decision-making and core values of the rule of law (like fairness and transparency). Moving past today’s dominant question of whether machine learning technologies can be made to conform to legal criteria, or whether a new regulatory paradigm should be defined by data science, it asks: how can our ideas of good administrative decisions evolve for the coming age when humans and machines are indistinguishable? The project aims to articulate conceptions of decision quality that are appropriate to evolving technosocial ecologies; to integrate those conceptions with contemporary legal theory and jurisprudence; and to identify reform to administrative decision practices and related legal doctrines. This talk has three main aims. First, I will orient the project’s overall approach in relation to dominant strategies to protect administrative decision-making, including ‘human in the loop’. Second, I will outline its unique multi-method research design. Finally, I will explain the use of collaborative ‘Live Action Role Play’ in ‘prefiguring’ models of participatory deliberation, and in reflecting on value and quality in decision-making processes.

3:30 pm–4:00 pm
Speaker: Andreea Bobu (UC Berkeley)

As robots are increasingly deployed in real-world scenarios, a key question is how to best transfer knowledge learned in one environment to another, where shifting constraints and human preferences render adaptation challenging. A central challenge remains that often, it is difficult (perhaps even impossible) to capture the full complexity of the deployment environment, and therefore the desired tasks, at training time. Consequently, the representation, or abstraction, of the tasks the human hopes for the robot to perform in one environment may be misaligned with the representation of the tasks that the robot has learned in another. In this talk, I postulate that because humans will be the ultimate evaluator of system success in the world, they are best suited to communicating the aspects of the tasks that matter to the robot. To this end, I will discuss our insight that effective learning from human input requires first explicitly learning good intermediate representations and then using those representations for solving downstream tasks.

Friday, July 15th, 2022

9:30 am–10:00 am
Speaker: Serena Wang (UC Berkeley)

With the rapid proliferation of machine learning technologies in the education sphere, we address an urgent need to investigate whether the development of these machine learning technologies supports holistic education principles and goals. We present findings from a cross-disciplinary interview study of education researchers, investigating whether the stated or implied "social good" objectives of ML4Ed research papers are aligned with the ML problem formulation, objectives, and interpretation of results. Our findings shed light on two main alignment gaps: the formulation of an ML problem from education goals, and the translation of predictions into interventions.

10:00 am–10:30 am
Speaker: Yuchen Cui (Stanford)

Human-in-the-loop machine learning (HIL-ML) is a widely adopted paradigm for instilling human knowledge in autonomous agents. Many design choices influence the efficiency and effectiveness of such interactive learning processes, particularly the interaction type through which the human teacher may provide feedback. While different interaction types (demonstrations, preferences, etc.) have been proposed and evaluated in the HIL-ML literature, there has been little discussion of how these compare or how they should be selected to best address a particular learning problem. In this talk, I will introduce an organizing principle for interactive machine learning that provides a way to analyze the effects of interaction types on human performance and training data. I will also identify open problems in understanding the effects of interaction types.

11:00 am–11:30 am
Speaker: Thao Phan (Monash University)

In their influential introduction to racial formations, Michael Omi and Howard Winant define race as an essentially visual phenomenon. They state that “race is ocular in an irreducible way. Human bodies are visually read, understood, and narrated by means of symbolic meanings and associations. Phenotypic differences are not necessarily seen or understood in the same consistent manner across time and place, but they are nevertheless operating in specific social settings” (2015, 28). In studies of media and digital culture, moreover, processes of racialisation have most often been conceptualised as operating in visual regimes and studied using visual methods. But how do we study race when its formations are primarily figured through regimes of computation that rely on structures that are, for the most part, opaque? How do we account for processes of racialisation that operate through proxies and abstractions that figure racialized bodies not as single, coherent subjects, but as shifting clusters of data? In this paper, we discuss the challenges of researching race within algorithmic culture. We argue that previous formations of race that had been dependent on visual regimes are now giving way to structures that manage bodies through largely non-visible (or invisual) processes. In this new regime, race emerges as an epiphenomenon of processes of classifying and sorting — what we call “racial formations as data formations.” This discussion is significant because it raises new theoretical, methodological, and political questions for scholars of culture and media. This paper asks: how are we supposed to think, to identify, and to confront race and racialisation when they vanish into algorithmic systems that are beyond our perception? How do post-visual regimes disrupt the ways we have traditionally studied race? And, what methods might we use to render these processes tractable — if not “visible” — for the purposes of analysis and critique?

11:30 am–12:00 pm
Speaker: Fabian Offert (UC Santa Barbara)

This talk discusses the concept of history inherent in so-called 'foundation models,' focusing on OpenAI’s CLIP model, a large-scale multimodal model that projects text and image data into a common embedding space. CLIP facilitates not only the automated labeling of images but also the automated production of images from labels (as one component of the DALL-E 2 composite model). Starting from Walter Benjamin’s concept of history, I argue that the spatialization of the past that occurs in models like CLIP invalidates the past’s potential to become history, to be productively reframed in a moment of crisis, to be both similar and dissimilar to the present at the same time.