Forbes recently published an article on the complex definition of fairness in the context of AI.
When considering how to define and implement fairness in a field as consequential as artificial intelligence, many different frameworks emerge. One talk at a 2018 ACM conference even outlined 21 different definitions of fairness in this context.
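Two of the more widely used definitions can be made concrete, and they can disagree on the same classifier. Below is a minimal sketch, using invented toy data, comparing a demographic-parity check (equal positive-prediction rates across groups) with an equal-opportunity check (equal true-positive rates across groups); all group labels, outcomes, and predictions here are hypothetical.

```python
# Toy illustration of two fairness definitions applied to the same predictions.
# All data below is invented for demonstration only.

def demographic_parity_gap(preds, groups):
    """Absolute difference in positive-prediction rates between two groups."""
    rate = {}
    for g in set(groups):
        members = [p for p, grp in zip(preds, groups) if grp == g]
        rate[g] = sum(members) / len(members)
    a, b = sorted(rate)
    return abs(rate[a] - rate[b])

def equal_opportunity_gap(preds, labels, groups):
    """Absolute difference in true-positive rates between two groups."""
    tpr = {}
    for g in set(groups):
        pos = [(p, y) for p, y, grp in zip(preds, labels, groups)
               if grp == g and y == 1]
        tpr[g] = sum(p for p, _ in pos) / len(pos)
    a, b = sorted(tpr)
    return abs(tpr[a] - tpr[b])

groups = ["A"] * 4 + ["B"] * 4
labels = [1, 1, 0, 0, 1, 1, 1, 0]   # true outcomes
preds  = [1, 0, 1, 0, 1, 1, 0, 0]   # model predictions

# Both groups receive positive predictions at the same rate...
print(demographic_parity_gap(preds, groups))            # 0.0
# ...yet truly-positive members of group B are recalled more often than A's.
print(equal_opportunity_gap(preds, labels, groups))
```

The point of the toy example is that a single model can satisfy one definition (here, demographic parity) while violating another (equal opportunity), which is why the choice among the 21 definitions matters.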
One common theme in discussions of fairness in AI is preventing unfair biases from being created or reinforced in a technology's outcomes. Identifying the possible sources of bias is paramount to rooting it out and preventing unfairness in fields as important as hiring, lending, and policing, all of which have burgeoning AI applications.
Bias can intrude at every stage of AI development, but it begins with data collection. Training AI on data that underrepresents certain groups can skew results: facial recognition software, for example, has shown poorer accuracy on darker-skinned individuals because they were underrepresented in the databases used to train it. Training data can also accurately reflect societal biases, as when language models trained on text from the internet inadvertently reproduce the hate prevalent in certain corners of it. Finally, the objective given to a machine learning model can itself reflect bias. In one recent healthcare prioritization algorithm, the objective was to prioritize patients who had spent the most on healthcare in the past, on the assumption that spending served as a proxy for past sickness. But unequal access to care nationwide makes spending an inaccurate proxy, and the algorithm ended up prioritizing healthier white patients over sicker Black patients. Choosing a different indicator of health, the number of chronic medical conditions, ultimately corrected this and allowed more Black patients to receive help.
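The proxy problem described above can be sketched in a few lines. Assuming hypothetical patient records (every name and number below is invented), ranking by past spending and ranking by chronic-condition count select different patients when access to care is unequal:

```python
# Hypothetical patient records illustrating proxy-label bias: equally sick
# patients can have very different past spending when access to care is
# unequal. All IDs and values are invented for illustration.
patients = [
    {"id": "p1", "spending": 9000, "chronic_conditions": 2},
    {"id": "p2", "spending": 2500, "chronic_conditions": 5},  # sicker, less access
    {"id": "p3", "spending": 7000, "chronic_conditions": 1},
    {"id": "p4", "spending": 3000, "chronic_conditions": 4},  # sicker, less access
]

def prioritize(records, key, top_n=2):
    """Rank patients by the chosen proxy (descending) and return top-n IDs."""
    ranked = sorted(records, key=lambda r: r[key], reverse=True)
    return [r["id"] for r in ranked[:top_n]]

# The spending proxy favors the high-access (healthier) patients...
print(prioritize(patients, "spending"))            # ['p1', 'p3']
# ...while a direct health indicator surfaces the sicker patients.
print(prioritize(patients, "chronic_conditions"))  # ['p2', 'p4']
```

Nothing about the ranking code changes between the two runs; only the choice of target does, which is the sense in which the objective itself can encode bias.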
Incorporating diversity into the development and testing of AI can help root out these biases. Diversity and fairness go hand in hand, especially in AI, where a range of perspectives helps identify blind spots that might otherwise go undetected as unfair. As Morgan Gregory, author of the Forbes article, puts it, “fairness in AI starts and ends with people.”