
Clearview AI Under Fire from UK

Clearview AI, the controversial facial recognition company that trains its algorithm on some 20 billion photos scraped from the internet (including websites such as Facebook and Instagram), has been fined $9.4 million by the UK and ordered to delete all photos of UK residents from its database. Clearview's potential reach is vast: it effectively lets users search for a face the way they would run a Google search. Among the many criticisms levied at the company are that its technology relies on facial recognition, which has historically proven racially discriminatory in its accuracy, and that its mere existence creates the risk of both drastic security breaches and overt government surveillance.

The primary ground on which the UK (among other countries, including Australia, France, and Italy, which have issued similar orders) is fining Clearview is its use of photos of UK residents without their consent. However, it seems unlikely that Clearview will face any real enforcement from the UK or will need to comply with its orders, thanks to a few key loopholes. While photos of UK residents were certainly used to train the algorithm, it is technically impossible to determine the residency of the people in those photos, says Hoan Ton-That, CEO of Clearview. Moreover, as a US company, Clearview is not subject to UK or EU jurisdiction, nor is it actively doing business in the UK, although UK law enforcement agencies have used Clearview's technology in the past.

In the US, it’s a slightly different story. Earlier this month, a lawsuit from the ACLU resulted in Clearview being prohibited from selling its facial recognition database to most private companies across the US (with the exception of banks and other financial institutions), limiting its sale to government agencies and police departments. While this arrangement still carries its own perils, it marks a step in the right direction. The Illinois lawsuit succeeded by showing that Clearview violated the state's Biometric Information Privacy Act, which bars private companies from using biometric identifiers of residents (such as the faceprints Clearview builds) without consent. Clearview must now obtain consent from Illinois residents before using their data, and, as a result of the lawsuit, Illinois law enforcement agencies are also barred from using Clearview's technology.

Both the Illinois lawsuit and the UK's most recent order reflect the need for broader legislation, particularly in the US, where no federal privacy law is in place. The use of Clearview AI by law enforcement and government agencies could prove dire, given the real possibility of overt government surveillance and the racial inaccuracy such technology has shown in the past (along with the real, life-changing consequences innocent citizens have faced as a result). The EU hopes to impose stricter AI legislation soon, but for now, the UK's order is limited to requiring Clearview to delete the data of its residents. Whether Clearview will comply remains to be seen.
