
Who Is Making Sure the A.I. Machines Aren’t Racist?

The New York Times recently published an article on Google’s firing of two AI ethics experts and its grave implications.

After completing her Ph.D. in 2018, Dr. Timnit Gebru was hired by Google for her important work on ensuring that AI does not propagate the biases of the predominantly white, male demographic developing it. At Google, she worked with Dr. Margaret Mitchell on developing ethical AI at the company.

Three years later, Dr. Gebru and Dr. Mitchell had both been dismissed from Google. The company made efforts to suppress important research conducted by Dr. Gebru: when she submitted a paper to an academic conference exposing bias against women and people of color in the kind of AI systems Google builds, the company demanded that she retract the paper or remove the names of Google employees from it. Dr. Gebru replied that she would resign unless she received the reasoning behind the order; Google treated this as an offer of resignation and accepted it. The company later fired Dr. Mitchell for searching through her own corporate email for material to defend Dr. Gebru.

Google’s firing of the two AI experts recontextualizes past AI blunders that show clear bias against women and people of color, and makes them graver. The first of these was Google Photos’ sorting of 80 photos of a Black man at a concert into a folder labeled “gorillas.” The company’s solution to the blatant racism in its technology was simply to remove the “gorillas” category altogether, but the mistake remains telling of a deeper problem in AI technology.

At the New York tech firm Clarifai, for example, Deborah Raji discovered that because the company’s content moderation system, built to detect pornographic images, was trained on stock photos that mainly featured white men, the software was more likely to falsely flag Black faces as pornographic.

Joy Buolamwini, a Black graduate student and past collaborator of Dr. Gebru, discovered that a facial recognition system she was testing could not detect her face, yet when she held a white mask up to her face, the software detected the mask. This led her to test similar technologies from Microsoft, IBM, and Amazon, and she found that these systems, often used in policing, were consistently less accurate for women and people of color. Her findings had real consequences, as Amazon agreed to a moratorium on police use of its Rekognition technology, but they are also indicative of a larger problem across the industry.
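The core of this kind of audit is simple: measure a system’s error rate separately for each demographic group rather than in aggregate, where strong majority-group performance can hide minority-group failures. As a minimal sketch only, with toy data and group labels that are hypothetical rather than drawn from Buolamwini’s actual benchmark, a disaggregated accuracy check might look like this in Python:

```python
from collections import defaultdict

# Toy records for illustration: each pairs a demographic group label
# with whether a face was truly present and whether the model detected it.
results = [
    # (group, face_present, model_detected)
    ("lighter-skinned men",   True, True),
    ("lighter-skinned men",   True, True),
    ("lighter-skinned women", True, True),
    ("darker-skinned men",    True, True),
    ("darker-skinned women",  True, False),
    ("darker-skinned women",  True, False),
]

def accuracy_by_group(records):
    """Return detection accuracy computed separately per group."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, truth, predicted in records:
        total[group] += 1
        correct[group] += (truth == predicted)
    return {group: correct[group] / total[group] for group in total}

# The aggregate number can look acceptable while one group fails badly.
overall = sum(t == p for _, t, p in results) / len(results)
print(f"overall accuracy: {overall:.0%}")
for group, acc in sorted(accuracy_by_group(results).items()):
    print(f"{group}: {acc:.0%}")
```

Run on the toy data above, the aggregate accuracy is 67% while darker-skinned women score 0%, exactly the kind of gap that only surfaces once the evaluation is broken out by group.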

The firing of Dr. Gebru and Dr. Mitchell looms large over the AI community, reflecting Google’s refusal to acknowledge a clear bias problem. Their dismissal joins a long line of hollow corporate efforts in which mission statements committed to change never produce any actual change.

