EU Proposing Rules To Drive AI Out Of Europe

The EU claims to be “striving for more,” but the EU Commission’s leaked AI framework promises Europe will be innovating less and less.



The European Commission’s motto for the next 5 years is “Europe that strives for more”, representing the ambition to be more competitive at a global level. Unfortunately, it seems the EU has a good chance of becoming a global champion… against technology and innovation. I wonder whether European citizens are aware of the very real possibility that within 5 years (or sooner!) they’ll be deprived of access to many digital applications. Hello, 1970s!

As usual in Brussels, Commission draft documents were leaked before the adoption of any major strategy or legislative proposal. Another secret that everybody knows. In this case, it was the Commission’s preparatory document on its long-expected AI strategy, prematurely revealing the Commission’s hand. This attempt to create a framework for AI made in the EU can only be taken as an example of an extreme application of the precautionary principle. Utopian and naive, it’s an awkward approach to the practical implementation of a strict regulatory framework.

The leaked draft proposes some tough testing, conducted by… someone? It’s not clear who. They’ll be looking for “high-risk” AI, swapping diverse, internationally curated datasets for smaller, EU-biased ones, and then imposing the EU’s vision on the rest of the world. A world that is rapidly losing patience with the regulatory colonialism Brussels is now championing. The likely outcome? The other kids will simply take their toys and go home.

I hope that someone is playing a joke on the “Brussels bubble” and that this is not the proposed framework for AI in Europe, because much of it reads like one. For instance, the text states that the “horizontal framework should not be so excessively prescriptive that it would create a disproportionate burden, especially for SMEs.” This is immediately followed by a proposal for a complicated regulatory framework and a type-approval scheme which, clearly, offers no startup or SME any incentive to do business in the EU. Whether the burden is “disproportionate” hardly matters if it’s heavy enough to crush companies of every size.

For the sake of analysis, let’s assume this is the strategy the Commission will soon formally propose. We know there are voices that have requested this kind of approach.

Here are the main flaws of the strategy intended to create “an ecosystem of trust”:

A special regulatory framework, over and above any other legislation at the EU and Member State level (including GDPR), is proposed for “high-risk” AI applications.

The way the high-risk applications are to be identified is, to say the least, confusing. Two criteria are proposed: a list of sectors (healthcare, transport, etc.), combined with uses that pose significant risks. We can foresee many, many grey areas that will bring legal uncertainty (the toy sketch below shows how quickly they appear). One proposed way through this is for developers to self-identify the risk level of the applications they’re developing. Is this feasible, dear devs? Does anyone want to put their hands up for “high risk”? Risk for whom? Risk of what, where, why, and how? Crossing the street is risky. The rest of the applications, classified as low risk, would remain under the relevant EU law, so that much we understand, however…
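As a thought experiment, here is a toy encoding of the draft’s two-pronged test. Everything in it is hypothetical: the sector list is abbreviated from the leaked examples, and the “significant risk” flag is exactly the self-assessment the draft would ask developers to make.

```python
# Toy encoding of the leaked draft's two-pronged "high-risk" test.
# Sector list abbreviated; all names here are illustrative, not official.

HIGH_RISK_SECTORS = {"healthcare", "transport"}

def is_high_risk(sector: str, poses_significant_risk: bool) -> bool:
    # Draft logic: a listed sector AND a use that poses significant risks.
    return sector in HIGH_RISK_SECTORS and poses_significant_risk

# The grey areas appear immediately:
print(is_high_risk("healthcare", False))  # a hospital shift-planner: low risk?
print(is_high_risk("recruitment", True))  # risky use, unlisted sector: exempt?
# And who supplies the boolean? The draft suggests the developers themselves.
```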

Next, the European Commission is proposing “an EU framework on data quality and traceability.”

One would assume that the goal of this framework would be to raise the standards found in the third world, and anywhere else beyond our borders (looking at you, America). It implies that any AI applications and algorithms created, developed, and utilized in the EU should be trained exclusively on datasets “that have been gathered and used according to EU requirements”. In other words, developers should ONLY use datasets from the EU! If systems developed outside the EU fail the certification assessment, then those systems should be re-trained in the EU, using European datasets.

This isn’t funny, it’s dangerous. EU datasets aren’t “unbiased” – they’re perfectly biased, like any dataset. They are filters for rejecting anything, anyone, and any thought originating from the 7 or 8 billion people who aren’t European.

Let’s look closer. Does the notion of “European datasets” mean exclusively data from within the EU? Or would datasets from, say, Norway, Iceland, Switzerland, or the UK be acceptable? How about data from candidate countries like Serbia: is that European enough? And from what date would a dataset gathered in the EU count as compliant with this standard? It’s pretty likely that the current EU datasets would have to be excluded.

Any developer can tell you that European data is neither unique nor necessary for developing a well-functioning AI model. What should a developer do where there is no (or, more likely, too little) quality European data? Further, developers can tell you that for any model trained on third-party datasets (which is common practice, by the way), it would be impossible to ensure the data was gathered “in accordance with European rules and requirements”; the sketch below makes this provenance problem concrete. The quality and diversity of the data used for training AI are essential to the quality of the output. Such systematic limitations on collecting and using data would make EU AI biased and ineffective. Who would risk the inevitable liability of taking a well-trained AI system, retraining it on a more limited set of data, and then, should anything go wrong, having the differing results of the “good” version and the “EU” version used as evidence of suboptimal performance? The objectives of ensuring the diversity and fairness of AI systems would be clearly undermined. It’s easy to foresee the result of such an approach: many applications will be automatically ruled out of the EU, or at best degraded.
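To illustrate, here is a minimal, hypothetical sketch of what an “EU-only data” filter would look like in practice. Every field name and record in it is invented, and the perimeter set is deliberately incomplete, because the draft itself never defines it. The point is structural: third-party data usually arrives with no provenance metadata at all, and “unknown” has to be treated as non-compliant.

```python
# A minimal, hypothetical sketch of an "EU-only data" filter.
# All field names and records are invented for illustration.

corpus = [
    {"text": "ein Beispiel", "origin": "DE"},    # gathered in the EU
    {"text": "an example",   "origin": "US"},    # gathered abroad
    {"text": "un exemple",   "origin": "FR"},    # gathered in the EU
    {"text": "licensed set", "origin": None},    # third-party data: provenance unknown
]

# Which borders count as "European"? Norway? The UK? Serbia?
EU_PERIMETER = {"AT", "BE", "DE", "FR"}  # truncated; the draft never says

def provably_compliant(record: dict) -> bool:
    # A record can only be certified if its origin is both *known*
    # and inside the perimeter; "unknown" must default to "no".
    return record["origin"] in EU_PERIMETER

compliant = [r for r in corpus if provably_compliant(r)]
print(f"{len(compliant)} of {len(corpus)} records survive the filter")
# -> 2 of 4 here; for real third-party corpora, far less would survive.
```

Retrain on whatever survives such a filter and the “EU version” of the model will, almost by construction, underperform the original.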

But wait, there’s more!

According to the proposal, ex-ante and ex-post (“before” and “after”, for our non-Latin devs) conformity assessments should be carried out by a network of testing centers in the Member States. Sounds good, but how exactly would this happen? It’s not clear. AI is embedded in a large variety of products that already fall under the rules and jurisdiction of many national market surveillance authorities. Who will conduct the testing of AI systems, and how? Will the algorithms be tested separately from the products? The most popular apps push updates at least 6 times a year, and for many, updates come every two weeks. There’s no reason to believe AI software would diverge from this practice, meaning certification testing would need to be completed within a couple of weeks, with thousands of apps in the system at any point in time (a rough sketch follows below). How often would the testing be repeated? Are we sure there is enough expertise to sustain such a complex certification scheme? Do the Member States have enough experts to read tens of thousands of lines of code and unravel the mathematics inside? Would documentation alone be enough? How would this certification system respect trade secrets and privacy laws? Who will audit the testing centers themselves? Does passing a test imply the product is safe to be put on the market? Which market: national markets or the entire EU Single Market (remember, digital products are marketed cross-border)? Should the developers or the producers (also not clear which) obtain approval from each Member State?

On top of that, there are ex-post checks and “deterrent fines”. Why would an entity divulge ALL its intellectual property, degrade its product by adding bias, and then subject itself to penalties, rather than simply skip the EU market altogether? How do we lock down the digital border so that EU citizens can’t just access foreign AI from their desktops and phones? What happens when US AI provides better results than EU AI? Is this what we want? The extraterritorial dimension of the proposal is very clear, but it comes from a lack of understanding of the digital world. AI exists in a virtual space: if the data, the algorithm, and the processing all sit outside the EU, and only the interface is inside it, what legal basis does the EU have for demanding access to such a system? The entire approach seems problematic from a trade perspective and could raise serious problems for EU trade negotiators in the future.
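A rough back-of-envelope calculation shows the scale problem. The numbers below are my illustrative assumptions, not figures from the leaked draft, but the shape of the result holds for any plausible inputs: a fortnightly release cadence against a multi-week review queue produces a permanent, massive backlog.

```python
# Back-of-envelope load estimate for an ex-ante testing pipeline.
# All inputs are illustrative assumptions, not figures from the draft.

apps_in_scope = 10_000           # assumed number of "high-risk" AI apps in the EU
releases_per_app_per_year = 26   # assumed cadence: one update every two weeks
review_weeks = 4                 # assumed duration of one conformity assessment

arrivals_per_week = apps_in_scope * releases_per_app_per_year / 52
in_review_at_once = arrivals_per_week * review_weeks  # Little's law: L = arrival rate x wait

print(f"{arrivals_per_week:.0f} new releases arriving per week")
print(f"~{in_review_at_once:.0f} releases under review at any moment")
# -> 5000 per week, ~20000 in the queue, before a single re-test.
```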

My final questions:

  • Who will want to code and develop AI solutions in Europe?

  • Who will want to deploy their products in the EU?

  • And most importantly, who can do business in such a regulatory environment?!

I’m waiting for the official document, due to be published by the European Commission on February 19th. I remain hopeful that it will present an ambitious strategy, one that fosters innovation (not chills it!) and allows the EU to reap the benefits of AI technologies.

Please, let’s not drive AI out of Europe.


By Karina Nimară

Director of EU Policy and Head of Brussels Office - Karina previously served as Legal Advisor and Internal Market attaché at the Permanent Representation of Romania to the EU. Prior to her work with the Romanian diplomatic mission, Karina spent ten years in European Union affairs within the Romanian Government. While there she coordinated, inter alia, the process for transposition and implementation of EU legislation. Karina holds a law degree and specializes in EU law and policies. Based in the Alliance’s Brussels office, she's a tech enthusiast, enjoying the dawn of the Age of Artificial Intelligence. Other than robots, she's fascinated with cats and owls.

