On August 2, 2016, the creators of Beauty AI expressed dismay at the fact that “the robots did not like people with dark skin.” Of the 44 winners across the various age groups, all but six were White, and “only one finalist had visibly dark skin.”2 The contest used what was considered the most advanced machine-learning technology available at the time. Called “deep learning,” the software is trained to code beauty using pre-labeled images; the images of contestants are then judged against the algorithm’s embedded preferences.3 Beauty, in short, is in the trained eye of the algorithm.
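Beauty AI’s actual code is not public, but the mechanism just described, a classifier trained on human-labeled images and then used to score new faces, can be sketched in a few lines. Everything in the sketch below (the feature vectors, the labels, the choice of classifier) is a hypothetical toy, not the contest’s software; it exists only to show how a model’s “judgment” is nothing more than the statistical preferences baked into its training labels.

```python
# A minimal sketch (NOT Beauty AI's actual code) of how a supervised
# "beauty" scorer inherits whatever preferences its labelers held.
# All features, labels, and the classifier are hypothetical stand-ins.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend each image has been reduced to an 8-dimensional feature
# vector; in this toy setup, dimension 0 loosely tracks skin tone.
n_train = 1000
X_train = rng.normal(size=(n_train, 8))

# The "ground truth" is nothing but human raters' labels. If raters
# systematically favored lighter skin (lower values in dimension 0),
# that bias becomes the model's learned definition of "beautiful."
biased_labels = (X_train[:, 0] < 0).astype(int)  # 1 = rated "beautiful"

model = LogisticRegression().fit(X_train, biased_labels)

# A new contestant is judged purely against the embedded preferences:
contestant = rng.normal(size=(1, 8))
score = model.predict_proba(contestant)[0, 1]
print(f"'beauty' score: {score:.2f}")  # reproduces the raters' bias
```

The point of the sketch is that the model never encounters “beauty” as such: it only encounters labels, so any pattern in who did the labeling, and whom they favored, is reproduced as the algorithm’s verdict.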
As one report about the contest put it, “[t]he simplest explanation for biased algorithms is that the humans who create them have their own deeply entrenched biases. That means that despite perceptions that algorithms are somehow neutral and uniquely objective, they can often reproduce and amplify existing prejudices.”4 Columbia University professor Bernard Harcourt remarked: “The idea that you could come up with a culturally neutral, racially neutral conception of beauty is simply mind-boggling.” Beauty AI is a reminder, Harcourt notes, that humans are really doing the thinking, even when “we think it’s neutral and scientific.”5 And it is not just the human programmers’ preference for Whiteness that is encoded, but the combined preferences of all the humans whose data are studied by machines as they learn to judge beauty and, as it turns out, health.