
How AI-powered tech landed man in jail with scant evidence

In late August, AP News published a piece detailing Michael Williams's wrongful arrest at the hands of machine learning-based gunshot detection technology, along with some of the software's history and many of its major flaws.

In May 2020, during the protests following George Floyd's murder, Michael Williams was simply driving back from a gas station when Safarian Herring, someone he recognized from his neighborhood, asked for a ride. As the two drove, someone in a passing car shot Herring, and Williams took him to the hospital, where he later died. Three months later, in August, Williams was interrogated and ultimately jailed for 11 months on a first-degree murder charge: the gunshot detection technology ShotSpotter had identified a gunshot near where Williams was seen in the car with another person. His case was ultimately dismissed for insufficient evidence.

Williams's case reflects larger problems of bias in ShotSpotter. First, the technology's accuracy is in question: AP's testing found that the software often misidentifies sounds, and ShotSpotter has not made its software available for independent review or scientific testing. According to CEO Ralph Clark, the algorithm is trained on "real-world feedback loops from humans," which means potentially incorrect data submitted by officers can shape the algorithm. Furthermore, ShotSpotter employees often manually modify the location of a detected sound, sometimes at the behest of police. This is a clear opening for human bias: in Williams's case, the initial sound, originally classified as a firecracker before being relabeled a gunshot, was detected a mile away from the location it was moved to, where Williams was seen. Additionally, ShotSpotter is deployed in high-crime areas, which are disproportionately Black and Latino. This raises the same concern as other predictive policing software (some of which ShotSpotter has acquired and uses itself, such as HunchLab): the risk of feedback loops in which these communities see more arrests and, as a result, more policing.

Real human impact is a dangerous potential consequence of technology like ShotSpotter, as Williams's case shows: his 11 months in prison, during which he had COVID-19 twice, nearly drove him to take his own life. Mistakes like those that led to his arrest should not be taken lightly by the companies developing machine learning algorithms, especially for policing, because they put human lives in the hands of unproven, potentially unreliable black-box systems. ShotSpotter in particular has not been shown to substantially reduce crime or increase arrests in the areas where it is used, raising the question of whether it is necessary at all. Enthusiasm for gunshot detection systems from the Biden administration and from local governments across the country, including in Chicago, where Williams's case unfolded, will only expand the technology's use and its related misfires, making local pushback all the more important. Several cities, including Charlotte, NC; San Antonio, TX; and Fresno, CA, have already terminated their ShotSpotter contracts over false alerts or a lack of alerts.

ShotSpotter is one example of the growing role of artificial intelligence in policing, and the real human dangers that come with its use extend well beyond this particular gunshot detection software.
