Rebuilding an Optimistic Vision for AI Policy

A close-up image of a pair of eyes and the nose bridge between them, all composed out of rows of magenta ones and zeros of various sizes.

by Serena Booth (Law and Society fellow, Spring 2025)

Recall November 6, 2024 — the day after the U.S. election. I was driving back to my home in Washington, DC, from Ohio with colleagues. We were exhausted: we’d been out on the campaign trail knocking on doors and clinging to the fragile hope that voters would turn out to deliver a mandate for worker- and consumer-first policies in the election, despite the headwinds of inflation and party disunity. We were also heartbroken. On that long drive home, we cycled through tears and laughter, grief and gallows humor. I was heartbroken not because of the rebuke to my political party, but because of the accompanying rebuke to scientists and expertise in government. 

Just hours earlier, I had imagined a very different future. As an AI researcher and policymaker, I had dreamed about landing my AI policy priorities in legislation: finally enacting whistleblower protections for AI developers, or even more potent whistleblower incentives. Updating the laws that govern an old form of AI — credit scores — to strictly measure repayment ability rather than the profitability of potential borrowers. Updating our antitrust laws for AI, where AI systems can more readily tacitly collude and maintain unstable pricing equilibria. Preventing the rise of robot bosses and algorithmic wage discrimination before these societal problems metastasize. Using AI to streamline permitting and mortgage processing with an eye to lowering rents, rather than letting corporations use AI to raise them. In short, I imagined harnessing AI as a predistributive technology — a tool to reduce inequality and to improve the conditions of work — and I viewed legislation as the vehicle to achieve this.

I arrived at the Simons Institute just two months later, still intensely grieving the loss of my imagined future. I poured my thoughts into a position paper to argue that consumer protection laws, which aim to protect consumers from unfair and deceptive business practices, are critical for the safe development of AI. I wrote about how laws that predate personal computing have been effectively used to limit the applications of AI, and how we should continue to lean into those old, technology-agnostic, pro-consumer laws. The reviewers rebutted: “All this consumer protection stuff sounds great, but what about the state of consumer protection under the new administration?” Of course they were right: How could we rely on the Consumer Financial Protection Bureau to apply its ethos of “no fancy tech exceptions” to oversee AI deployment when there is hardly a Consumer Financial Protection Bureau left?

Over the next few months at the Simons Institute, between hikes in the glorious California sunshine, forward-looking discussions with the UC Berkeley executive fellows in applied technology policy, and participation in the Institute community, I began to think about how I could rebuild my optimism for the future of AI policy. If my imagined future could not materialize through my own policymaking efforts, then perhaps I could help build new foundations. Over the course of my time at the Institute, I planned the inaugural Brown University CNTR AI Policy Summer School: a two-week program that brings together graduate and undergraduate students and postdocs from across the United States to learn how to bridge technical expertise with practical, hands-on policymaking.

The summer school is not a partisan effort: students are drawn from across the nation to develop their own independent policy agendas and organize their own advocacy meetings with congressional staff. On the first day of the summer school, the new administration released its "AI Action Plan," which provided a live and extremely influential case study to work with: students developed policy briefs, some directly responding to the AI Action Plan. Over the course of the two weeks, policymakers from both the former Biden and current Trump administrations spoke to the students and explained the many continuities, and occasional discontinuities, of AI policy priorities across this political transition. The students also met with policymakers from across executive agencies and with think tanks from both the right and the left, gaining a firsthand view of how technical expertise is translated into policy. Students learned that policymaking is not some abstract machine churning away in Washington; it's messy, human, and profoundly shaped by the voices that show up in the room — voices that must include experts.

What struck me most about the inaugural summer school class was its hunger to serve the nation. From more than 270 applicants, we selected just 16 exceptional students, each arriving with their own perspective on how AI policy should reflect AI expertise. Some were motivated by the inequities they saw algorithms reinforce in their own communities; others wanted to channel technical breakthroughs toward pressing national priorities. Some students hoped to confront the challenges of AI in education or in how AI is used to provide companionship. Others sought to steer the construction of new AI data centers toward renewable energy sources. Yet others were eager to shape U.S. policy on AI in the context of strategic competition with China. Together, these students learned about what careers in government look like in practice, where their expertise will be most valuable, and how to imagine futures worth fighting for.

Watching these students draft policy briefs, debate hard trade-offs, and walk the halls of Congress started to rebuild my optimistic vision. This reminded me that optimism is not naive: it is a deliberate act of choosing to believe that education and community can give us the tools to steer the development of AI with both appropriate caution and enthusiasm.

Serena Booth is an assistant professor of computer science at Brown University. She studies how people craft specifications for AI systems and robots, and how people understand what is learned from their specifications. Before joining Brown, she served as an AI policy advisor in the U.S. Senate on the Democratic staff of the Banking, Housing, and Urban Affairs Committee. In Spring 2025, she became a Simons Institute Law and Society fellow.
