Deirdre Mulligan (UC Berkeley)
At every level of government, officials contract for technical systems that employ machine learning: systems that perform tasks without explicit instructions, relying instead on patterns and inference. These systems frequently displace discretion previously exercised by policymakers or individual front-line government employees with an opaque logic that bears no resemblance to the reasoning processes of agency personnel. Yet because agencies acquire these systems through government procurement processes, they, and the public, have little input into, or even knowledge about, their design, or how well that design aligns with public goals and values. In this talk I explain how the decisions about goals, values, risk, certainty, and the elimination of case-by-case discretion inherent in machine-learning system design make policy; how the use of procurement to manage these systems' adoption undermines appropriate attention to those embedded policies; and, drawing on administrative law, I argue that when system design embeds policy, the government must use processes that address both technocratic concerns about the informed application of expertise and democratic concerns about political accountability. Specifically, I describe ways in which the policy choices embedded in machine learning system design today run afoul of the prohibition on arbitrary and capricious agency action, absent a reasoned decision-making process that both enlists the expertise necessary for deliberation about such choices and makes visible the political choices being made. I conclude by sketching options for bringing the necessary technical expertise and political visibility into government processes for adopting machine learning systems, through a mix of institutional and engineering design solutions.