Interestingly, the MIT data scientists interviewed by anthropologist Kathleen Richardson were conscious of race, class and gender, and none wanted to reproduce these normative stereotypes in the robots they created … [They] avoided racially marking the “skin” of their creations … preferred to keep their machines genderless, and did not speak in class-marked categories of their robots as “servants” or “workers,” but companions, friends and children.
Richardson contrasts her findings with those of anthropologist Stefan Helmreich, whose pioneering study of artificial life in the 1990s depicts researchers as “ignorant of normative models of sex, race, gender and class that are refigured in the computer simulations of artificial life.”46 But perhaps the contrast is overdrawn, given that colorblind, gender-neutral, and class-avoidant approaches to tech development are another avenue for coding inequity. If data scientists do indeed treat their robots like children, as Richardson describes, then I propose a race-conscious approach to parenting artificial life – one that does not feign colorblindness. But where should we start?
Automating Anti-Blackness

As it happens, the term “stereotype” offers a useful entry point for thinking about the default settings of technology and society. It first referred to a practice in the printing trade whereby a solid plate called a “stereo” (from the ancient Greek adjective stereos, “firm,” “solid”) was used to make copies. The duplicate was called a “stereotype.”
The term evolved; in 1850 it designated an “image perpetuated without change” and in 1922 was taken up in its contemporary iteration, to refer to shorthand attributes and beliefs about different groups. The etymology of this term, which is so prominent in everyday conceptions of racism, urges a more sustained investigation of the interconnections between technical and social systems.
To be sure, the explicit codification of racial stereotypes in computer systems is only one form of discriminatory design. Employers rely on credit scores to decide whether to hire someone, companies use algorithms to tailor online advertisements to prospective customers, judges employ automated risk assessment tools in sentencing and parole decisions, and public health officials apply digital surveillance techniques to decide which city blocks to focus medical resources on. Such programs are able to sift and sort a much larger set of data than their human counterparts, but they may also reproduce long-standing forms of structural inequality and colorblind racism.
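To make that mechanism concrete, consider a minimal sketch of how a “colorblind” scoring tool can still sort people along racial lines. Everything in the example below is hypothetical – the neighborhood names, the arrest counts, and the weights are invented for illustration and are not drawn from any actual hiring, sentencing, or surveillance system. The point is structural: a score that never asks about race can lean on proxy features, such as a neighborhood’s arrest history, that already carry the imprint of discriminatory policing.

# Illustrative sketch only: a hypothetical "race-blind" risk score.
# All names, counts, and weights are invented for demonstration and
# do not reproduce any real risk assessment tool.

# Historical arrest counts per neighborhood -- a record shaped by
# decades of uneven policing, not by underlying rates of offending.
NEIGHBORHOOD_ARRESTS = {
    "Northside": 940,   # heavily policed neighborhood (hypothetical)
    "Lakeview": 120,    # lightly policed neighborhood (hypothetical)
}

def risk_score(prior_convictions: int, neighborhood: str) -> float:
    """Compute a risk score without ever referencing race.

    The 'neighborhood' feature quietly imports the history of where
    arrests were concentrated, so the output correlates with race
    even though race never appears as an input.
    """
    neighborhood_rate = NEIGHBORHOOD_ARRESTS[neighborhood] / 1000
    return 0.5 * prior_convictions + 2.0 * neighborhood_rate

# Two people with identical records receive very different scores
# purely because of where they live.
print(risk_score(prior_convictions=1, neighborhood="Northside"))  # 2.38
print(risk_score(prior_convictions=1, neighborhood="Lakeview"))   # 0.74

The particular numbers matter less than the pattern they illustrate: an ostensibly objective calculation launders a historical pattern of surveillance into a present-day score, with no racial category ever named in the code.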
And these default settings, once fashioned, take on a life of their own, projecting an allure of objectivity that makes it difficult to hold anyone accountable.48 Paradoxically, automation is often presented as a solution to human bias – a way to avoid the pitfalls of prejudicial thinking by making decisions on the basis of objective calculations and scores. So, to understand racist robots, we must focus less on their intended uses and more on their actions. Sociologist of technology Zeynep Tufekci describes algorithms as “computational agents who are not alive, but who act in the world.”49 In a different vein, philosopher Donna Haraway’s classic Simians, Cyborgs and Women narrates the blurred boundary between organisms and machines, describing how “myth and tool mutually constitute each other.”50 She describes technologies as “frozen moments” that allow us to observe otherwise “fluid social interactions” at work. These “formalizations” are also instruments that enforce meaning – including, I would add, racialized meanings – and thus help construct the social world.51

Biased bots and all their coded cousins could also help subvert the status quo by exposing and authenticating the existence of systemic inequality and thus by holding up a “black mirror” to society,52 challenging us humans to come to grips with our deeply held cultural and institutionalized biases.