The question of how to ensure fairness in artificial intelligence technology has driven my independent research for the past few years.
Here is the abstract of my research. Please email me at email@example.com for the full thesis.
As AI, and more specifically machine learning, grows to power many aspects of society, from autonomous vehicles to surveillance, it has the potential to reflect and amplify society's worst prejudices and biases. Many statistical criteria of fairness relate an algorithm's predictions to its outcomes, but it has been shown mathematically that when groups differ in their base rates, no predictor can satisfy all of these criteria at once; of them, only calibration, which stipulates that the proportion of people predicted to experience a given outcome equals the proportion who actually experience it, can generally be maintained. Yet even a well-calibrated algorithm faithfully reproduces the societal disparities present in the data it is trained on, and that data is often the source of biased predictions. The solutions to eliminating bias in machine learning therefore lie outside the algorithm: in the engineers developing the technology, in the collection and vetting of the data used to train it, and in the way the technology's results are used to make decisions.
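To make the calibration criterion concrete, here is a minimal sketch, not taken from the thesis, of how one might measure it: for each group, compare the average predicted probability of an outcome with the rate at which the outcome actually occurred. The function name and the toy data are illustrative assumptions.

```python
# Illustrative sketch of the calibration criterion: within each group,
# the mean predicted probability should match the observed outcome rate.
# A nonzero gap means the predictor over- or under-estimates that group.

def calibration_gap(scores, outcomes):
    """Mean predicted probability minus observed outcome rate.

    scores   -- predicted probabilities in [0, 1]
    outcomes -- actual binary outcomes (0 or 1)
    """
    predicted_rate = sum(scores) / len(scores)
    observed_rate = sum(outcomes) / len(outcomes)
    return predicted_rate - observed_rate

# Toy data (hypothetical): predictions and outcomes for two groups.
groups = {
    "A": ([0.8, 0.6, 0.7, 0.5], [1, 1, 0, 1]),
    "B": ([0.3, 0.4, 0.2, 0.5], [0, 1, 0, 0]),
}

for name, (scores, outcomes) in groups.items():
    print(name, round(calibration_gap(scores, outcomes), 3))
```

A predictor satisfying calibration would show a gap near zero for every group; note that this says nothing about whether error rates are balanced across groups, which is exactly why the other fairness criteria can still be violated.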