Abstract
The pervasive use of algorithmic risk assessment complicates or undermines many bedrock democratic values, institutional standards of accountability, and constitutional principles. Who, or what, determines our fitness for inclusion within newly constituted technological communities? What has primacy in which contexts: our personhood as defined by juridical or political consensus, our status as corporate “stakeholders” in business enterprise, or our “risk factors” as governed by probabilistic modeling? The answers to these questions affect research standards and IRBs, notions of privacy as well as public transparency, fiduciary relationships with professionals such as lawyers and doctors, the right to be free from unwarranted search and seizure, the right not to testify against oneself, freedoms of assembly and association, and conventions against nonconsensual human experimentation. Ungoverned, data-driven assortments are creating new forms of stigma, disparate impact, and group discrimination. We will explore what process is due and the merits of “explainability, interpretability, and transparency” as a response.