Data is required for the refinement of most cutting-edge technology, and it will only become more important in future as we develop more sophisticated AI and ML models, fueled by richer, higher quality data sets.
However, there are strict regulations around how data can be used, particularly within the EU. The EU’s General Data Protection Regulation (GDPR) requires businesses to obtain consent before storing subject data, in order to preserve the privacy of EU citizens online and offline.
These regulations affect AI companies the world over, because they restrict how data can be moved out of the bloc – to servers in the US, say. Rely on cloud services in your tech stack? Then the rules probably affect you too.
Observing these regulations is not straightforward, though. EU courts themselves have repeatedly called into question the legality of the framework for EU-US data transfer that supposedly abides by GDPR, and a new solution is still some way off. So it’s essential all AI businesses, anywhere in the world, follow the situation closely.
Privacy Shield in tatters
The key problem for international software companies is data transfer between the EU and the US. The US is the EU’s second biggest trading partner, and many of the world’s biggest tech companies are based there, so using their services – with data centres located in America – often requires moving data across borders.
That was governed, for a time, by the EU-US Privacy Shield framework agreed between the two powers. Then in 2020, this was invalidated by the European Court of Justice itself, on the grounds that US national security laws risked breaching EU citizens’ data privacy.
Since the ECJ delivered this verdict, data transfers have been allowed using ‘Standard Contractual Clauses’, but this has not proven much more straightforward. Amazon and Meta have been involved in legal disputes over how their algorithms work within the confines of EU law. In fact, Meta has even threatened to pull out of Europe entirely to avoid facing legal action from the EU. The Austrian Data Protection Authority, meanwhile, has ruled that even using Google Analytics to monitor your website’s traffic numbers is illegal under GDPR, setting a precedent for other EU countries to follow.
This leaves companies in a bind. Even if you don’t want to do business in Europe, that doesn’t mean you’ll be able to avoid the reach of EU law. Say a business uses a data set containing just one respondent from Spain to develop a new product; the whole production process could unravel if that business doesn’t adhere to GDPR. But how do you do that?
Help is at hand – eventually
Fortunately, businesses may not be stuck in this legal quagmire forever. Hope has arrived in the form of the recently announced Trans-Atlantic Data Privacy Framework – the result of more than a year of behind-the-scenes negotiating and diplomacy, but still some way from adoption.
According to the joint statement released at the time, it will “provide a durable basis for trans-Atlantic data flows, which are critical to protecting citizens’ rights and enabling trans-Atlantic commerce in all sectors of the economy, including for small and medium enterprises.”
Until everything is ironed out, businesses will be left without clarity for many more months, making it harder for them to plan ahead. So seek advice, and check every process that might present a hazard. What data sources do you use? What cloud services do you rely upon? Where are all of your customers based? While this grey area of international law remains, it’s essential to be watertight on the answers to all these questions.
Looming AI legislation
And there’s an even more compelling reason for AI businesses to get on top of EU law now: there’s more of it to come. Following the regulatory framework proposal laid out last year, the EU has said that new AI regulation (entirely separate from GDPR) “could enter into force in the second half of 2022 in a transitional period.”
The proposed regulation would adopt a risk-based approach. In some high-risk cases – such as services underpinning critical infrastructure – AI systems would be subject to strict obligations before they could be used in the EU. These could include human oversight measures to provide transparency, risk assessment protocols, and detailed compliance procedures. Flouting them would incur substantial fines.
The decisions that AI-powered systems can make, particularly in the healthcare industry, have the capacity to change lives. It’s crucial that businesses in this category stay vigilant, ensure that their processes are transparent, and pay close attention to where data is stored or transferred.
In the meantime, businesses must be proactive in their approach to data to avoid falling foul of EU laws. Every business wants to grow, and at some point that means working in, or with, the EU and its vast population. And no business wants to foot the bill for an avoidable fine when it would rather focus on innovating and expanding. It’s imperative that businesses tighten their operations, double-check that their data sets are in line with EU law, and get ready for further regulation.