Whereas the CEO of MYA Systems describes “a very human-like interaction” in the algorithmic vetting process, those on the hunt for jobs recount a much different experience. According to one report, applicants are frustrated not only by the lack of human contact but also by having no idea how they are evaluated or why they are repeatedly rejected. One job seeker described questioning every small movement and micro-expression and feeling a heightened sense of worthlessness because “the company couldn’t even assign a person for a few minutes. The whole thing is becoming less human.”14
Less human in terms of interaction, yes, and still discriminatory. One headline put it bluntly: “Your Next Interview Could Be with a Racist Robot.” As Stanford University computer scientist Timnit Gebru warns, “[i]t’s really dangerous to replace thousands of human [perspectives] with one or two algorithms.” This sentiment is echoed by Princeton University computer science professor Arvind Narayanan, who tweeted the following in response to AI-powered employment decisions:
Human decision makers might be biased, but at least there’s a *diversity* of biases. Imagine a future where every employer uses automated resume screening algorithms that all use the same heuristics, and job seekers who do not pass those checks get rejected everywhere.
In October 2018, news broke that Amazon had scrapped an AI recruitment tool after realizing that the algorithm discriminated against women. The system scored applicants on a scale of 1 to 5; because it was trained primarily on the resumes of men submitted over a ten-year period, it downgraded applications that listed women’s colleges or terms such as “women’s chess club.” But even after programmers edited the algorithm to keep it “gender neutral” with respect to these obvious words, Amazon worried that “the machines would devise other ways of sorting candidates that proved discriminatory.”15 They rightly understood that neutrality is no safeguard against discriminatory design. And although the reporting focused on gender, there is no reason to assume such proxy effects stop there: a model trained on a racially skewed applicant pool could encode racial proxies in exactly the same way.
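To make the proxy problem concrete, here is a minimal, hypothetical sketch in Python. It is not Amazon’s system, and every feature, name, and number is invented for illustration: it trains a toy resume screener on historically male-skewed hiring decisions, then removes the explicitly gendered term and shows that a correlated proxy feature absorbs the bias anyway.

```python
# Hypothetical illustration of proxy discrimination in resume screening.
# All data is synthetic; nothing here reflects Amazon's actual model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Latent attribute the screener is never shown directly.
is_woman = rng.random(n) < 0.2  # training pool skews male

# An explicitly gendered term (e.g., "women's chess club") and a
# correlated proxy (e.g., a women's college) that survives "neutralization".
explicit_term = (is_woman & (rng.random(n) < 0.6)).astype(float)
proxy_feature = (is_woman & (rng.random(n) < 0.5)).astype(float)
skill = rng.normal(size=n)  # genuine, gender-neutral signal

# Historical labels: past hiring favored men regardless of skill.
hired = (skill + 1.5 * (~is_woman) + rng.normal(scale=0.5, size=n)) > 1.0

X_full = np.column_stack([skill, explicit_term, proxy_feature])
X_neutral = np.column_stack([skill, proxy_feature])  # explicit term removed

for label, X in [("with explicit term", X_full), ("'neutralized'", X_neutral)]:
    scores = LogisticRegression().fit(X, hired).predict_proba(X)[:, 1]
    print(f"{label}: mean score for men {scores[~is_woman].mean():.2f}, "
          f"for women {scores[is_woman].mean():.2f}")
# Both runs score women lower: removing the word does not remove the bias.
```

Dropping the explicit term changes little, because the model simply reroutes the same historical signal through whatever correlated features remain.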