European Union AI Act

"Thou shalt not make a machine in the likeness of a human mind." - Frank Herbert, Dune

How do you govern a technology that is ill-defined, rapidly changing, maybe capable of destroying all of humanity, and maybe capable of greatly aiding it? Do you ban it, ignoring possible benefits, or do you let it develop without guardrails, disregarding the damage along the way?

Last week, the European Union took a major step towards answering these questions with the EU AI Act. The act is designed to protect individuals by ensuring that AI is developed and used in a way that is safe and respects human rights. In this post, I’ll look at the key ideas of the act and argue that it is a good base framework for future governance.

First, what even is AI, according to the EU?

"An 'AI system' is a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."

Nearly every word in this definition could be (and has been and will be) debated, but I’d like to focus on the scope. The act is designed to be future-proof, and the definition encompasses a wide range of technologies. An earlier definition relied on an explicit list of techniques, from “machine learning approaches” to “Bayesian estimation,” scaring some statisticians who suddenly learned that they were doing AI. We have no idea what techniques AI will use in the future or what weird names we’ll use for them, so an explicit list isn’t a great idea. The final definition is instead open, maybe even too open: does a database lookup that generates different outputs based on its inputs now qualify as AI? Well, if the private sector and news hype cycles are to be believed, everything is now AI, even things that are very clearly not. A broad legal definition is better at this stage, as a wider base can support more legislation that refines it.

The act is much more explicit when it comes to applications of AI. It takes a risk-based approach, meaning that the level of regulation depends on the level of risk associated with the technology. Risk is defined as “the combination of the probability of an occurrence of harm and the severity of that harm,” and is broken down into four categories: Unacceptable, High, Limited, and Minimal Risk. Unacceptable-risk systems are prohibited, high-risk systems are heavily regulated, limited-risk systems mostly have transparency requirements, and minimal-risk systems are essentially unregulated.

Banning applications is pretty heavy-handed and is reserved for only the riskiest cases. Examples include social scoring systems, assessing an individual’s criminal risk, scraping facial images to build recognition databases, biometric categorization like racial profiling, and emotion recognition in workplaces or educational institutions. These aren’t abstract examples; these are all technologies that have been developed or used. I guess I’ll have to look for alternatives to video monitoring of my students’ faces or making them wear brain-wave trackers to know if they are paying attention.

High-risk AI systems are the second category and aren’t banned but are heavily regulated. These are systems that concern safety, are used in critical applications, or profile individuals. Providers of these systems must prove that they are managing that risk by providing documentation, designing systems to allow for human oversight, and being transparent about the system and its data. An example of this sort of system is automated hiring software, a growing market fraught with problems of bias and a lack of transparency. Another example is medical devices, which stand to gain enormously from AI but also face many potential risks. These high-risk applications are the most likely to be the subject of future litigation, and the act provides a framework for that litigation by setting a standard of compliance for these systems.

One point that was heavily debated is the regulation of generative AI like ChatGPT. These technologies have been split into two tiers. The first tier covers all general-purpose models, except those used only in research or published under an open-source license. These will be subject to transparency requirements, including detailing their training methodologies and energy consumption, and must show that they respect copyright law. The copyright point has been the subject of much ongoing litigation in the US and will prove problematic for companies like OpenAI and Stability AI. The second and stricter tier will cover general-purpose models deemed to have “high-impact capabilities,” which pose a higher “systemic risk,” such as the large language models behind ChatGPT. While preliminary, the handling of generative AI is impressive, considering how quickly the technology has advanced during the development of this act.

The act clearly follows, and is designed to work with, the General Data Protection Regulation (GDPR), a law protecting the privacy and personal data of individuals in the EU, adopted in 2016 and applicable since 2018. Since then, the GDPR has been refined and interpreted through court cases and the application of fines. For example, the protection of data transferred between the EU and the US has been extensively debated. The GDPR as written in 2016 was not the definitive stance on data privacy in the EU, but it was a necessary first step from which European agencies could act to protect individuals’ privacy and data. Following it, many other countries have adopted or are considering similar laws.

The European AI Act is just the beginning of what will likely be a long process of refining and interpreting the rules governing AI in the EU. The text still has to be formally adopted by the European Parliament and the Council, and even then its provisions will not apply immediately. As with the GDPR, the AI Act will be discussed and debated through court cases and the application of the regulation. Legislation is slow, while digital technology tends to move fast and break things (still). Given the breakneck pace of progress in AI, it was necessary to take this first step now. Ensuring that AI development is safe and respects human rights is a difficult task, but this act is a momentous step in the right direction.

Dennis Wilson