AI experts say research into algorithms that claim to predict criminality must end

The Verge recently published an article on the call to stop research into algorithms that aim to predict criminality.

The Coalition for Critical Technology, a group of AI researchers, sociologists, and data scientists, recently called for an end to new research into algorithms that purport to predict criminality. Their open letter called into question the validity and objectivity of the very notion of criminality: even outside of AI applications, the definition of what is and isn't criminal is shaped by societal biases and prejudice, often racially charged ones.

The algorithms the coalition opposes are trained on facial scans and criminal statistics, each of which poses its own threat to the objectivity of such a technology, as neither is a neutral indicator of criminal activity. Physiognomy, the study of facial features, has for centuries been a racially biased field that veers into pseudoscience in its claimed ability to predict a person's nature from their appearance. Applying it to the rapidly growing field of AI holds clear potential for harm and serves as a slippery slope into scientific racism: using supposedly empirical algorithms to propagate societal biases. Furthermore, the criminal statistics used to train these algorithms are themselves skewed by unjust policing practices, a problem brought to the forefront over the past year by the death of George Floyd and by police brutality nationwide. In light of this reckoning with prejudiced policing, a call to end research into facial recognition software with such dangerous applications in policing is necessary.

The issue came to light when the coalition requested that Springer, the largest publisher of academic books, pull a paper it was slated to publish. Springer responded that the paper had already been rejected during peer review, but research into such morally and scientifically dubious algorithms has surfaced repeatedly, including a 2016 paper from Shanghai Jiao Tong University that claimed to predict criminality from facial features. Researchers from Google and Princeton refuted that paper, but so long as such technology continues to be developed, the threat it poses will persist. What good is the supposed accuracy of these algorithms if they are built on inherently biased data?
