The European Commission is set to unveil a new set of regulations for artificial intelligence products. While some AI tech would be outright banned, other potentially harmful systems would be forced through a vetting process before developers could release them to the general public.
The proposed legislation, per a leak obtained by Politico’s Melissa Heikkila, would ban systems deemed to be “contravening the Union values or violating fundamental rights.”
The regulations, if passed, could limit the potential harm done by AI-powered systems in “high-risk” areas of operation such as facial recognition and social credit systems.
Per an EU statement:
This proposal will aim to safeguard fundamental EU values and rights and user safety by obliging high-risk AI systems to meet mandatory requirements related to their trustworthiness. For example, ensuring there is human oversight, and clear information on the capabilities and limitations of AI.
The commission’s anticipated legislation comes after years of internal research and collaboration with third-party groups, including a 2019 white paper detailing the EU’s ethical guidelines for responsible AI.
It’s unclear exactly when such legislation would pass; the EU has only given a “2021” time frame.
Also unclear: exactly what this will mean for European artificial intelligence startups and research teams. It’ll be interesting to see how development bans play out, especially considering no such regulation exists in the US, China, or Russia.
The regulation is clearly aimed at big tech companies and medium-sized AI startups that specialize in controversial AI tech such as facial recognition. But even with the leaked proposal, there’s still little information about how the EU plans to enforce these regulations or exactly how systems will be vetted.