Inside the EU's New AI Act
EUobserver recently published an article about the EU’s new AI Act. Find the full article here.
The EU has been at the forefront of developing legislation to address the risks of AI, and the unintended adverse effects of artificial intelligence are an undeniably global issue. For example, in 2019, paralleling similar events in the United States, self-learning algorithms used in the Netherlands to assess fraud risk disproportionately targeted minority groups, resulting in a political scandal over the technology.
Proposals to regulate AI, along with as many as 3,000 different amendments, are before the Parliament and are expected to be greenlit next December; the EU's AI Act would then follow. These proposals bring to light a debate prevalent throughout the process of making technology more ethical: the difficult balance between using policy to foster innovation and using it to ensure safety and fairness, that is, protecting citizens' freedoms and rights.
With exceptions for extreme cases like identifying kidnapping victims or preventing terrorist attacks, the European Commission is proposing a ban on various applications of AI, including the storage of biometric identification data for law enforcement. In such a nascent field, defining areas of potential harm to fundamental rights can be ambiguous: how to define risk in new technology (a definition some lawmakers see fit to expand), where the act can be enforced, and what reasonable non-compliance fines would be. That said, lawmakers are focused on "high risk" technology with the potential to violate basic human rights, a concern the UN has raised sharply about AI built for forecasting and profiling. Potential (and, in the past, proven) human rights violations by AI include infringements on the right to privacy, freedom from arbitrary arrest, and fair access to trial, employment, education, and public services. AI that scores credit or students' exams, for example, can target minority groups and violate these fundamental rights. Companies whose technology falls into this category would be subject to EU accountability and oversight or face fines of up to 30 million euros.
However, some governments are still arguing for the use of AI by law enforcement and migration authorities, which poses a clear risk of unfairly surveilling and policing minority groups. On the other end of the spectrum, some critics argue for a complete halt to the use of AI until its risks are fully addressed.
Transparency and accountability are the major themes of the new proposal. Many are calling for careful consideration of the people impacted by any given AI system, with remedies to be issued to citizens unfairly harmed by a violation of the AI Act, as well as clear communication and explanation to the public.