Protocol.com recently published an article detailing the resistance among computer scientists in the field of computer vision to considering the ethical implications of their work. Find the article here.
This June, at the Computer Vision and Pattern Recognition Conference (CVPR), researchers were for the first time "strongly encouraged" to include a section in their submissions discussing the potential negative societal impacts of their research — a mark of progress, given how little ethical consideration this crucial field has historically received. Computer vision is a particularly far-reaching area of AI, with applications ranging from unlocking phones to surveillance, policing, and deepfake technology. The history of bias and discrimination in these systems makes greater scrutiny by researchers a necessary step.
However, not all researchers agree with this initiative. Many feel that it undermines academic freedom and the independence of their research, which often aims to help people through technology or sometimes has no obvious societal impact or application. The sentiment among some researchers, both at the conference and as they move into corporate AI, is that ethical concerns simply aren't their job to worry about.
While CVPR was taking place in New Orleans, another conference — focused on fairness, accountability, and transparency in systems such as AI — was being held across the world in South Korea. The two separate conferences reflect a divide between computer scientists trained only in their field, advancing AI technology without real ethical consideration, and AI ethicists, whose important research is unfortunately largely confined to the ethics community. Bringing these two fields together is essential as AI becomes applicable to most areas of modern society and poses potential threats to real human beings.