The New York Times recently published a piece describing the troubling findings of Exposing.AI, which indicate the extent to which personal photos have been used to develop facial recognition systems.
Exposing.AI allows users to search the various image sets used to train facial recognition software for their own personal photos. It does this by matching images from the Flickr photo-sharing database against the photo databases used to train AI systems. In certain cases, this use of personal images to teach AI to recognize faces can be illegal, and to many it certainly feels like a violation of privacy. Users uploaded their images to Flickr under Creative Commons licenses, which may have been ignored when then-Flickr owner Yahoo included the images in training data sets for computer vision. The implications of the undisclosed use of these images go beyond computer vision training; one past Flickr user found that his photos had been used both for surveillance systems in the US and other countries and to track the Uighur population in China.
The collection of personal images is not unique to Flickr; training data sets have also been drawn from other photo-sharing services, public cameras in places such as college campuses, dating sites, and social networks. One such training data set, MegaFace, was built without asking for the consent of those photographed. Uploaded to the internet for public download, MegaFace scaled far beyond its original purpose, an academic competition: the data set was downloaded more than 6,000 times by groups including defense contractors, intelligence agencies, and tech and surveillance companies. One of these surveillance companies was Megvii, whose technology the Chinese government used to track the Uighur population. Although MegaFace was taken down in May 2020, copies likely still exist and can be used for further research, showing the potentially grave consequences of using personal photos in training sets without consent.