Human-centered AI has become a common calling card for proponents of ML-based AI. Yet whether invoked as a genuine end or as a means to societal acquiescence and the quelling of resistance, the fate of the human in the AI-human dyad remains desperately in need of scrutiny. Holding fixed the human factor, all eyes focus on the dazzling array of AI-driven systems, from decision algorithms to robotic controls.
Shifting the spotlight to the human, the Cluster on AI and Humanity addresses a growing concern about the future of humanity – individuals, groups, societal institutions, and values – shaped, manipulated, and challenged by the imperatives of AI. The Cluster is organized around three themes: 1) human-in-the-loop; 2) human-AI (human-robot) complementarity; and 3) machine-readable (legible) humans. Motivated by the premise that no single discipline is equipped to address these questions independently, the cluster includes participants from the fields of law, the social sciences, computer science, and the humanities. Its aims are to learn from one another about promising past and ongoing research and, ultimately, to invigorate field-defining future research.
To subscribe to the announcements email list for this program, email sympa [at] lists.simons.berkeley.edu with the message body "subscribe ai2022announcements@lists.simons.berkeley.edu".
Helen Nissenbaum (Information Science and Digital Life Initiative, Cornell Tech; chair), Thomas Gilbert (Digital Life Initiative, Cornell Tech), Jake Goldenfein (University of Melbourne Law School), Connal Parsley (University of Kent Law School), Qian Yang (Information Science, Cornell University)