AI for all


I have always believed that we humans hold incredible power when creating the artificial intelligence technology that now shapes our everyday lives, including the quality of healthcare we receive, decisions on housing loans, and the screening of employment applications.

The question of how to make the results of machine learning algorithms fair has driven my independent research for the past few years. Statistical criteria, I argue, are insufficient to ensure fairness and eliminate bias in machine learning algorithms. Instead, the solutions lie with the humans writing the code and the ethical values they embed at each stage of the development process.

Here is the abstract of my research. Please email me at aryavcdesai@gmail.com if you would like the full thesis.

Abstract:

As AI, and more specifically machine learning, grows to power many aspects of society, from autonomous vehicles to surveillance, it has the potential to reflect society’s worst prejudices and biases. Many statistical criteria of fairness relate an algorithm’s predictions to its outcomes, but it has been shown statistically that every one of these criteria except calibration, which stipulates that the proportion of people who experience a given outcome matches the proportion predicted to experience it, is violated by even a fair predictive algorithm. However, even strong calibration faithfully reproduces the existing societal disparities in the data an algorithm is trained on, and those disparities are often the cause of biased results from predictive algorithms. Therefore, the solutions for eliminating bias in machine learning lie outside the algorithm: in the engineers developing the technology, in the collection and vetting of the data used to train it, and in the way the technology’s results are used to make decisions.
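To make the calibration criterion concrete, here is a minimal Python sketch of a per-group calibration check. Everything in it is illustrative: the function name, the binning scheme, and the synthetic data are my own assumptions, not material from the thesis. The synthetic scores are calibrated by construction, yet the two hypothetical groups have different base rates, which illustrates the abstract's point that a calibrated model can still mirror disparities already present in its data.

import numpy as np

def calibration_by_group(scores, outcomes, groups, n_bins=10):
    """For each group, compare the mean predicted probability in each
    score bin to the observed outcome rate in that bin. Under the
    calibration criterion described above, the two should match
    within every group."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    results = {}
    for g in np.unique(groups):
        mask = groups == g
        s, y = scores[mask], outcomes[mask]
        # Assign each score to a bin; clip so a score of exactly 1.0
        # lands in the top bin instead of falling off the end.
        idx = np.clip(np.digitize(s, bins) - 1, 0, n_bins - 1)
        results[g] = [(s[idx == b].mean(), y[idx == b].mean())
                      for b in range(n_bins) if (idx == b).any()]
    return results

# Illustrative synthetic data (not real): two hypothetical groups with
# unequal base rates, and outcomes drawn directly from the scores, so
# the scores are calibrated by construction.
rng = np.random.default_rng(0)
groups = rng.integers(0, 2, size=5000)
base_rate = np.where(groups == 0, 0.3, 0.6)
scores = np.clip(base_rate + rng.normal(0, 0.1, 5000), 0, 1)
outcomes = rng.binomial(1, scores)

for g, rows in calibration_by_group(scores, outcomes, groups).items():
    print(f"group {g}:")
    for predicted, observed in rows:
        print(f"  predicted {predicted:.2f} vs observed {observed:.2f}")

Run as-is, both groups print predicted and observed rates that agree bin by bin, even though the second group's outcome rate is roughly twice the first's: the check passes while the disparity in the data remains.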