A recent Nature survey asked researchers for their thoughts on ethical issues surrounding facial recognition studies. The survey follows considerable discourse in the facial recognition research community about the morality of both certain data sets used to train the AI and the potentially unethical implications of some studies. Most notably, researchers requested that the publisher Wiley retract a recent study that “trained algorithms to distinguish faces of Uyghur people.”
The Chinese government, drawing heavy criticism for its actions, has surveilled the Uyghur population and placed many Uyghurs in “re-education” (in reality, internment) camps. Because the government used facial recognition software in this surveillance, many consider the training of algorithms to distinguish Uyghur faces immoral.

Beyond this troubling connection, the collection of image data sets used to train facial recognition algorithms without the consent of those photographed is another concern. Examples of such collection include live webcams, camera footage from public locations, and image-sharing websites. The applications of these images are unknown and can range from commercial surveillance products to military projects. Efforts have been made to retract studies that used data sets gathered without consent from those depicted, but with varying degrees of success; image databases such as MSCeleb are still used in some studies. Furthermore, regarding the data collected to train the algorithm that recognizes Uyghur people, evidence suggests that the students photographed were not given sufficient information about how the data would be used to make a properly informed decision – another case of data being used without (full) consent from those involved. Similarly harmful implications can be seen in recently developed software that supposedly predicts whether someone is likely to become a criminal; it received substantial pushback and was ultimately judged to rest on questionable science.
The work was never published, and the consensus was that similar algorithms have worsened biases in the criminal justice system. Various proposals exist for maintaining ethics in facial recognition studies: ethics boards to review work, detachment from questionable technology firms (for example, those linked to mass surveillance), greater ethical scrutiny of a study before it is conducted, and statements at the end of papers about possible “societal impacts and ethical concerns.” The results of the Nature survey showed disagreement among researchers over the ethics of various aspects of the facial recognition research process. Some placed more emphasis than others on full, informed consent from those in a data set. Researchers more broadly agreed that facial recognition research on vulnerable populations was potentially immoral, although a minority argued that the applications of the software, not the research itself, should be held under greater scrutiny. There was no clear consensus on the ethical precautions that should be taken before research is conducted. The ethics of facial recognition research continue to be debated as awareness of the issue grows.