
Fixing algorithmic bias grows more urgent as companies face losses from biased AI
January 2022
As more leading tech companies develop AI technology used in fields ranging from policing to healthcare, the biases detected in their products are leading not only to real-world consequences for the people who use, and are discriminated against by, the tech, but to losses for the companies as well.
Revenue is the first of these potential losses: when biased AI screens job applicants or selects customers (deciding, for example, who a bank lends money to), it narrows both the talent pool and the customer base, limiting what these companies can earn. The more pressing and immediate loss, however, is legal, as growing vigilance toward discrimination in AI has already led to companies like Uber facing fines over their biased technology.
This legal risk has led companies like Twitter to introduce bias bounties, calling on the public to find and help remove bias in their code for a cash reward, one likely far smaller than the fines and litigation costs the governmental bodies that regulate them could impose.
However, regulation is difficult to define for such a broad field, because broad concepts like bias and fairness must themselves be carefully defined in the context of AI. Regulation is nonetheless in the works: the UK, for example, has begun to lay out legislation and legal guidelines for AI used in the country.
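To see why careful definitions matter, consider two standard fairness criteria from the machine learning research literature: demographic parity and equal opportunity. The toy sketch below (with entirely hypothetical data) shows a perfectly accurate classifier that fails one criterion while satisfying the other, so any regulation has to choose which one "fairness" means:

```python
import numpy as np

# Toy sketch with hypothetical data: the same classifier can pass one formal
# fairness criterion and fail another, which is why "fairness" needs a
# precise definition before it can be regulated or tested.
rng = np.random.default_rng(0)

n = 10_000
group = rng.integers(0, 2, size=n)            # 0 = group A, 1 = group B
base_rate = np.where(group == 0, 0.3, 0.5)    # outcome rates differ by group
y_true = (rng.random(n) < base_rate).astype(int)
y_pred = y_true.copy()                        # a perfectly accurate classifier

# Demographic parity: positive-prediction rates should match across groups.
dp_gap = abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

# Equal opportunity: true-positive rates should match across groups.
tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
eo_gap = abs(tpr(0) - tpr(1))

print(f"demographic parity gap: {dp_gap:.3f}")  # ~0.2: fails demographic parity
print(f"equal opportunity gap:  {eo_gap:.3f}")  # 0.0: satisfies equal opportunity
```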
Beyond legislation and bounties that regulate this technology once it is “in the wild” and impacting real people on a day-to-day basis, ethical consideration from the start of development, which can take various forms, is just as important. The datasets used to train algorithms often lack diversity, so greater inclusivity in these datasets is necessary to head off this bias, and a useful first step is simply auditing how well each demographic group is represented, as in the sketch below.
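A minimal, hypothetical audit (the column names and records are illustrative, not from any real system) might compare each group's share of the training data and of favorable outcomes:

```python
import pandas as pd

# Hypothetical training set for a lending model; columns are illustrative.
df = pd.DataFrame({
    "age_bracket": ["18-30", "31-50", "31-50", "51+", "18-30", "31-50"],
    "sex":         ["F", "M", "M", "M", "M", "F"],
    "approved":    [0, 1, 1, 1, 0, 1],
})

# Compare each group's share of the data, and its rate of favorable labels,
# against expectations; severe under-representation or skewed outcomes are
# warning signs before any model is trained on this data.
for col in ["age_bracket", "sex"]:
    print(f"\n{col} representation:")
    print(df[col].value_counts(normalize=True).round(2))
    print(f"{col} approval rate:")
    print(df.groupby(col)["approved"].mean().round(2))
```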
Diversity among engineers is another major necessity (and a current absence): a diverse team is less likely to share a single set of blind spots, and its members can spot biases their co-developers cannot. Additionally, algorithms rushed to market are often not tested thoroughly enough, so during development, testing against different demographics and running “sanity checks” to weed out and account for errors are crucial as well. This testing cannot end once an AI software is released: because algorithmic bias often reinforces itself, careful testing must continue after an algorithm is deployed.
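One shape such a check could take, sketched here under stated assumptions (the metric, the threshold, and the helper name are all illustrative, not a standard), is a pre-release gate that compares false-positive rates across demographic groups:

```python
import numpy as np

def group_sanity_check(y_true, y_pred, group, max_gap=0.05):
    """Hypothetical pre-release gate: block the release if false-positive
    rates diverge too far across demographic groups."""
    fpr = {}
    for g in np.unique(group):
        negatives = (group == g) & (y_true == 0)
        fpr[int(g)] = float(y_pred[negatives].mean())  # FPR for group g
    gap = max(fpr.values()) - min(fpr.values())
    print(f"false-positive rates: {fpr} (gap = {gap:.3f})")
    return gap <= max_gap

# Synthetic example: a model that flags group 1 more often, all else equal.
rng = np.random.default_rng(1)
n = 5_000
group = rng.integers(0, 2, size=n)
y_true = rng.integers(0, 2, size=n)
y_pred = (rng.random(n) < np.where(group == 1, 0.30, 0.15)).astype(int)

assert not group_sanity_check(y_true, y_pred, group)  # this model should fail
```

The same check can be scheduled against live traffic after deployment, since self-reinforcing feedback loops can widen gaps that looked acceptable at launch.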
While bias bounties and bias detection systems like those in use at IBM and Twitter mark a step in the right direction for greater ethical consideration of technology, it has proven harder for smaller firms with fewer resources to be as careful. Such testing is often expensive and so happens mostly at larger companies, but software from companies of all sizes remains in use by bodies as large and impactful as government agencies, making regulation of this tech just as necessary.
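IBM's bias detection work is available publicly as the open-source AI Fairness 360 (AIF360) toolkit, which lowers that cost barrier somewhat. A minimal sketch of one of its dataset-level metrics follows; the classes and methods are AIF360's, but the loan data is hypothetical:

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical loan decisions: sex=1 marks the privileged group,
# approved=1 the favorable outcome.
df = pd.DataFrame({
    "sex":      [1, 1, 1, 0, 0, 0, 1, 0],
    "approved": [1, 1, 0, 1, 0, 0, 1, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["approved"],
    protected_attribute_names=["sex"],
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Disparate impact: ratio of favorable-outcome rates (unprivileged over
# privileged); values far below 1.0 signal bias against the unprivileged group.
print(metric.disparate_impact())              # 0.25 / 0.75 = 0.33 here
print(metric.statistical_parity_difference()) # 0.25 - 0.75 = -0.50 here
```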