Kristian Lum and Rumman Chowdhury recently published an article for MIT Technology Review on the definition of the word “algorithm” and its implications for policy governing today’s AI. Find it here.
As AI permeates nearly every aspect of modern society, regulatory policies are slowly taking shape to ensure that the technology is ethical and fair to those subject to it. However, debate over what is and is not an “algorithm” may be hindering meaningful policy change. The word “algorithm,” Lum and Chowdhury explain, puts up a wall between the often human-led decisions behind a complex decision-making system and the accountability such systems require. As an example, in December, Stanford Medical Center misallocated COVID-19 vaccines, giving priority to administrators over frontline doctors. The administration then shifted the blame to a distribution “algorithm,” wording that implies an empirical, perhaps even AI-based system too opaque for humans to understand, and therefore one no person could be blamed for. In reality, the system used to decide vaccine distribution was a medical algorithm: a simple formula designed by a human committee, with no AI involved. A team of humans, including ethicists, chose the rules the system would use to determine vaccine eligibility from inputs such as age and department.
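A committee-designed formula of this kind can be sketched in a few lines of ordinary code. The function below is purely illustrative: the inputs, weights, and thresholds are invented for this sketch and are not Stanford’s actual formula, which was never published in this form.

```python
# Hypothetical rule-based vaccine priority score, in the spirit of a
# committee-designed "medical algorithm". All weights are invented for
# illustration; this is NOT Stanford's actual formula, and no machine
# learning is involved anywhere.
def priority_score(age: int, is_frontline: bool, dept_case_rate: float) -> float:
    score = 0.0
    score += min(age, 65) * 0.5            # older staff score higher, capped at 65
    score += 10.0 if is_frontline else 0.0  # bonus for direct patient contact
    score += dept_case_rate * 20.0          # weight for case prevalence in the department
    return score

# Staff are then simply ranked by score.
staff = [
    {"name": "resident", "age": 30, "frontline": True, "rate": 0.4},
    {"name": "administrator", "age": 60, "frontline": False, "rate": 0.05},
]
ranked = sorted(
    staff,
    key=lambda s: priority_score(s["age"], s["frontline"], s["rate"]),
    reverse=True,
)
```

The point of the sketch is that every line is a human judgment: which inputs count, how much each is worth, and where the caps sit. A small change to any weight can reorder who gets vaccinated first, which is exactly why the humans who chose those weights remain accountable.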
In machine learning and statistics, an algorithm is defined, according to Lum and Chowdhury, as “the set of instructions a computer executes to learn from data,” with “the resulting structured information” usually called a model. Such an algorithm may carry “weights,” information learned from the data, and may vary widely in complexity. An algorithm’s impact depends on the data it is applied to (e.g., which doctors receive the COVID-19 vaccine) and on the situation or field in which the resulting model is used. Under this broad definition, Stanford’s vaccine system qualifies as an algorithm, even though only humans were involved in its development.
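The algorithm/model distinction above can be made concrete with a minimal example. The sketch below uses ordinary least squares, a standard fitting procedure chosen here only for illustration: the function is the “algorithm,” and the weights it returns are the “model.”

```python
# Minimal sketch of the algorithm vs. model distinction: the fitting
# procedure is the "algorithm"; the weights it learns are the "model".
def fit_line(xs, ys):
    """Ordinary least squares for y = w*x + b (the learning algorithm)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
        (x - mean_x) ** 2 for x in xs
    )
    b = mean_y - w * mean_x
    return w, b  # these learned weights are the "model"

# Running the same algorithm on different data yields different weights,
# and hence a different model with a different impact.
w, b = fit_line([1, 2, 3, 4], [2, 4, 6, 8])
```

This is why the authors tie impact to the data and the setting: the same algorithm, fed different inputs or deployed in a different field, produces a different model with different consequences.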
The article argues that policymakers should focus on the impact of these systems rather than pigeonhole them with excessively broad or excessively rigid definitions. The language of such policies often contains complex definitions of what an algorithm is, definitions that have the potential to contradict one another. Hence, Lum and Chowdhury argue, the focus should be on the outcomes of “algorithms”: while “impact” is itself a vague word, it is more likely to hold those behind harmful systems accountable than basing policy on even more general terms like “algorithm” and “model.”