Rewriting Dystopian Plotlines: It’s All in the Algorithm
August 2021
The cases read like plotlines from a dystopian novel: a man is wrongfully arrested for robbing a watch store he has just visited. Despite a woman’s qualifications, her resumé for a tech job is automatically deprioritized. Another woman finds herself flagged as a credit risk and denied a bank loan for no justifiable reason.
These aren’t examples from fiction. They’re the experiences of women and people of color who have been harmed by the decisions of fundamentally flawed artificial intelligence. The wrongfully arrested man was Robert Julian-Borchak Williams, a Black man whom police facial recognition software misidentified from security camera footage in January 2020. The company deprioritizing the resumés of female applicants was Amazon. As Jeffrey Dastin reported for Reuters in 2018, the recruiting engine the company had been using to screen candidates had “taught itself that male candidates were preferable.” It disadvantaged resumés that mentioned two women’s colleges, and even the word “women’s.” And the algorithm wrongfully denying women loans? IBM’s Watson OpenScale detected that pattern: when credit-risk models were trained only on historical data, women were more likely to be marked as credit risks and denied loans.
At the root of these errors is a lack of diversity in the data used to train the algorithms. Artificial intelligence learns to make decisions based on the data it’s fed, so an algorithm exposed to only a narrow slice of data cannot make accurate and equitable decisions. The software that misidentified Mr. Williams is part of a larger trend. A 2019 NIST study found that false positive rates for facial matching are much higher for Asian and Black faces than for Caucasian faces, likely because nonwhite faces are underrepresented in the training databases. Predictive policing software, which aims to prevent crimes before they happen, is likewise trained on arrest data in which Black people are overrepresented. That overrepresentation, Michael Totty writes in The Wall Street Journal, is the product of “discriminatory policing practices.” Anti-Black discrimination creeps into the algorithms, which then reinforce the very arrest patterns that produced the data.
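To make the disparity concrete, here is a minimal sketch of the kind of audit behind findings like NIST’s: counting, for each demographic group, how often a face-matching system accepts a pair of images that do not actually belong to the same person. The records and group labels below are hypothetical and purely illustrative; they are not drawn from the NIST study.

```python
# Minimal sketch (hypothetical data): measuring false positive rates per
# demographic group, the kind of disparity the NIST study quantified.
# A "false positive" is a non-matching pair of faces the system accepts as a match.

from collections import defaultdict

# Each record: (group, pair_truly_matches, system_said_match). Illustrative only.
results = [
    ("Caucasian", False, False),
    ("Caucasian", False, False),
    ("Black", False, True),
    ("Black", False, False),
    ("Asian", False, True),
    ("Asian", False, False),
]

non_match_pairs = defaultdict(int)   # non-matching pairs evaluated per group
false_positives = defaultdict(int)   # of those, how many were wrongly accepted

for group, truly_matches, predicted_match in results:
    if not truly_matches:            # only non-matching pairs can produce false positives
        non_match_pairs[group] += 1
        if predicted_match:
            false_positives[group] += 1

for group, total in non_match_pairs.items():
    rate = false_positives[group] / total
    print(f"{group}: false positive rate = {rate:.2f}")
```

Run over a real evaluation set, the same simple bookkeeping is what reveals whether one group’s false positive rate is many times another’s.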
A similar pattern is at play for women applying to STEM jobs. The long history of gender discrimination means women are underrepresented in the training data. Informed by biased data, the candidate-ranking algorithms deprioritize female candidates and continue to exclude qualified women from STEM jobs. Unsurprisingly, algorithms trained to calculate credit risk using only historical data also perpetuate historical gender biases.
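The hiring example can be audited in a similar way. Below is a minimal, hypothetical sketch of checking a resumé-screening model’s output for gender skew using selection rates and the conventional “four-fifths” disparate-impact ratio; the records, field names, and threshold are illustrative assumptions, not details of Amazon’s system.

```python
# Minimal sketch (hypothetical data): auditing a resumé-screening model's output
# for gender skew. Field names and the 0.8 threshold are illustrative assumptions,
# not details of Amazon's system.

candidates = [
    {"gender": "female", "shortlisted": True},
    {"gender": "female", "shortlisted": False},
    {"gender": "female", "shortlisted": False},
    {"gender": "male", "shortlisted": True},
    {"gender": "male", "shortlisted": True},
    {"gender": "male", "shortlisted": False},
]

def selection_rate(group):
    """Fraction of candidates in `group` that the model shortlisted."""
    members = [c for c in candidates if c["gender"] == group]
    return sum(c["shortlisted"] for c in members) / len(members)

female_rate = selection_rate("female")
male_rate = selection_rate("male")

# The "four-fifths rule": a selection-rate ratio below 0.8 is a common warning
# sign of disparate impact.
ratio = female_rate / male_rate
print(f"female: {female_rate:.2f}  male: {male_rate:.2f}  ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Warning: possible disparate impact against female candidates")
```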
However, it’s possible to avoid perpetuating these biases. The same NIST study found that the wide false positive gap between Asian and Caucasian faces disappeared in some of the algorithms developed in Asian countries. With more Asian faces represented in the image sets, the software was more accurate. As Patrick Grother, one of the study’s authors, notes, “more diverse training data may produce more equitable outcomes.” Incorporating more perspectives and more representative data can reduce bias and produce fairer algorithms. And if we can rewrite the code, we can rewrite those dystopian plotlines, too.
Sources:
Dastin, Jeffrey. “Amazon scraps secret AI recruiting tool that showed bias against women.” Reuters, 10 Oct. 2018.
Hill, Kashmir. “Wrongfully Accused by an Algorithm.” The New York Times, 24 June 2020.
Totty, Michael. “How to Make Artificial Intelligence Less Biased.” The Wall Street Journal, 3 Nov. 2020.
“NIST Study Evaluates Effects of Race, Age, Sex on Face Recognition Software.” National Institute of Standards and Technology, 19 Dec. 2019.