
How to Make Artificial Intelligence Less Biased

The Wall Street Journal published an article about the various approaches that have been taken to eliminate bias in artificial intelligence. Find it here.

This bias has taken many forms: spotty facial identification for women of color, gender-biased credit card limits, less accurate crime prediction for Black people, and racial and gender bias in health risk software and job advertising. These biases have crept into many aspects of modern technology. Large-scale AI systems that make final decisions have tended to display prejudice through an unequal distribution of errors across certain groups. Both academic researchers and software developers are beginning to combat these biases, starting with identifying when unfair, biased decisions are made. Before looking at the efforts currently being taken, it is also important to consider the balance at play: some overall accuracy may have to be sacrificed to reduce the errors associated with racial bias. Nevertheless, some of these efforts are detailed here.
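As a rough illustration of what identifying unevenly distributed errors can look like in practice, here is a minimal Python sketch that compares error rates across demographic groups; the group labels and records are invented for illustration and are not drawn from the article or from any real system.

# A minimal sketch of a per-group error audit. The data below is made up.
from collections import defaultdict

def error_rates_by_group(records):
    """Return the fraction of incorrect predictions for each group."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical (group, model prediction, true outcome) triples.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 0),
]
print(error_rates_by_group(records))
# A large gap between groups' error rates is one signal that the model's
# mistakes fall disproportionately on one population.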

First, to identify bias, IBM, Google, and Microsoft are developing new software to audit the data and models used to train AI algorithms. To remove bias from the data, the AI model needs to be developed and trained differently. The bias is sometimes rooted in the underrepresentation of certain groups in the data, which makes the results less accurate for those groups (for example, poor facial recognition for people with darker skin, possibly due to a lack of Black faces in the database). Larger, more diverse training sets currently appear to be the most effective way to address this issue.

Beyond broadening the training data, changing the algorithms themselves to increase fairness is another option. Certain factors, such as zip codes, serve as implicit substitutes for race and gender when AI algorithms make their decisions. For instance, models trained only on historical data have marked women as higher credit risks than men for no substantive reason; reducing the weight of such factors, or ignoring them entirely when making final decisions, has helped restore fairness and avoid the bias the AI would otherwise impose. In other cases, new algorithms are built to directly counter historical bias, as with Zest AI, which pairs a bias-seeking algorithm with historical loan data to reduce overall bias. The scale of how strongly historical bias carries into current algorithms shows up in both gender and race: historically lower pay for women is reflected in biased credit scoring, and the unfair obstacles Black and Hispanic people have faced, which often shape their credit history, make them less likely to receive loans. Rather than eliminating such a factor altogether, its importance is simply lowered in order to preserve accuracy. This returns to the issue of balance raised earlier: finding the middle ground between accuracy and fairness in credit models, and in AI algorithms generally.
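To make the reweighting idea concrete, here is a minimal sketch of a linear scoring model in which the weight of a proxy variable is shrunk rather than removed; the feature names, weights, and applicant values are hypothetical and are not taken from any real credit model or from Zest AI.

# A minimal sketch of down-weighting a proxy feature in a toy credit score.
def credit_score(applicant, weights):
    """Simple linear score: sum of feature value * feature weight."""
    return sum(applicant[f] * w for f, w in weights.items())

weights = {
    "income": 0.5,
    "payment_history": 0.4,
    "zip_code_risk": 0.3,   # hypothetical feature acting as an implicit proxy
}

# Shrink the proxy's weight rather than removing the feature outright.
fairer_weights = dict(weights, zip_code_risk=0.05)

applicant = {"income": 0.7, "payment_history": 0.9, "zip_code_risk": 0.2}
print(credit_score(applicant, weights))         # original model
print(credit_score(applicant, fairer_weights))  # proxy down-weighted

Shrinking the weight rather than deleting the feature mirrors the trade-off described above: some of the factor's predictive value is kept, but its influence on the final decision, and the bias it carries, is reduced.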

The long-term goal is to achieve models that are both fair and accurate without having to lower the importance of certain factors. Finally, if neither the training data nor the model can be changed, adjusting the results offers another possibility. In terms of both race and gender, a lack of representation for a group in a certain category can push that group further down a list of search results: LinkedIn’s Recruiter tool, for example, inadvertently placed women lower in the list of potential candidates for job searches in certain fields. LinkedIn has since fixed this, but similar problems persist elsewhere, such as in Pinterest searches. Other biases, however, have not yet been addressed with technology changes. Predictive policing, for one, relies on arrest data in which Black people are overrepresented because of the greater discrimination they face from police; that discrimination makes them more likely to be arrested than white people, and the predictive policing algorithms then reinforce it. The definition of fairness itself is also debated in the AI development community, i.e., both what measures need to be taken to restore fairness and whether fairness means providing equal opportunities across races through extra measures or letting AI be race-blind.
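As an example of adjusting results rather than the model, the sketch below interleaves ranked candidates by group so that no group ends up entirely at the bottom of the list; this is a generic re-ranking illustration, not LinkedIn’s actual method, and the candidate data is invented.

# A rough sketch of result re-ranking: candidates are interleaved by group,
# preserving within-group order, so one group is not pushed to the bottom.
from itertools import zip_longest

def reranked(candidates):
    """Interleave candidates from each group, keeping within-group order."""
    by_group = {}
    for cand in candidates:
        by_group.setdefault(cand["group"], []).append(cand)
    merged = []
    for round_ in zip_longest(*by_group.values()):
        merged.extend(c for c in round_ if c is not None)
    return merged

# Hypothetical ranked results in which group "b" sits entirely below group "a".
results = [
    {"name": "cand1", "group": "a"}, {"name": "cand2", "group": "a"},
    {"name": "cand3", "group": "a"}, {"name": "cand4", "group": "b"},
    {"name": "cand5", "group": "b"},
]
print([c["name"] for c in reranked(results)])
# -> ['cand1', 'cand4', 'cand2', 'cand5', 'cand3']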

Some also question whether AI should be used at all in certain situations, given the substantial bias it has imposed on certain groups. The accuracy of AI, the need for it, and its strengths all continue to be debated and weighed as the technology changes going forward.

