Our Standpoints On The EC’s White Paper On AI
Update: The economic circumstances created by the Coronavirus crisis impose a new perspective. Technological innovation is the key driver of future economic growth. The EU policy measures proposed before this new context emerged should be reconsidered, now more than ever, and our contribution is intended to support that change of stance.
Where’s The Innovation Principle?
We’re disappointed that, instead of choosing a ‘smart’ policy and regulatory approach, as one would expect when it comes to technological progress, the EC has decided to adopt strict measures driven exclusively by the precautionary principle.
Innovation does not flourish in a climate of fear, hemmed in by preventive measures. This mindset cannot create the environment developers and entrepreneurs need in order to innovate, and it will stand in the way of the general uptake of AI solutions.
Brussels is sending the wrong message: that technological progress should be slowed down in the EU. This contradicts the declared ambition to foster the use of AI technologies to strengthen the EU’s competitiveness.
Disproportionate Regulatory Approach
The objective to foster the uptake of AI (so important for the competitiveness of European SMEs!) is undermined by a disproportionate approach focused on the other main objective of the strategy – “addressing the risks associated with certain uses of this new technology”.
The White Paper proposes regulatory options that go beyond this objective: they are meant to cover more than “certain uses”, and they translate into complex legal requirements, complemented by a whole new set of administrative burdens, from the inception phase throughout the AI system development life cycle.
The twin objectives of the strategy can be achieved through a more granular, fit-for-purpose regulatory approach.
The proposed ex-ante and ex-post conformity assessment mechanism for “high-risk AI applications” raises many concerns about its implementation. Topping up the existing conformity assessment schemes, or creating new ones, requires a high level of technical expertise and administrative capacity from the conformity assessment, accreditation, and market surveillance bodies in all Member States. Mutual recognition across the Single Market should function smoothly. Given the complexity of AI and the dynamic development in this area, the efficiency of such mechanisms is questionable (e.g. what should the frequency of assessments be when systems are continuously updated and optimized?). It is also unclear how the requirement to “re-train the systems in the EU” in cases of non-conformity could be fulfilled in practice. We have similar concerns regarding the voluntary labelling scheme for “low-risk” AI applications.
A rushed approach that ignores the specific nature of software development will only stifle innovation, raise costs, and delay the use of AI solutions in the EU.
Research and innovation need to be stimulated by concerted measures, from appropriate investment to regulatory sandboxes; the latter, however, are missing from the White Paper.
The Optimal Approach Starts With Semantics And Legal Clarity
AI systems are not inherently dangerous, but certain uses might be. We support a targeted approach: specific regulation addressing those particular situations in which the use of certain AI applications might pose high risks for users, rather than regulating “high-risk AI applications” as such. Those situations should be clearly defined in order to avoid legal uncertainty and overregulation.
Like any other technology, AI solutions could be misused, and they could pose different types of risks in different situations. And like any other technology, those risks, once identified, should be tackled by different measures, including measures of a regulatory nature. Because AI technologies are usually embedded in products, as the AI White Paper also mentions, there are already applicable legal requirements aimed at ensuring their safe placing on the EU market. Keeping the existing legislation up to date with technological progress is the most appropriate way to ensure legal certainty and clarity. It is a complex exercise, and the Developers Alliance is ready to contribute.
AI technologies can have dual uses, new forms of use may be discovered after they are placed on the market, and other new, innovative solutions may become available; the applicable legislation should therefore be future-proof and technologically neutral.
The Single Market Dimension
The Single Market dimension is highly relevant for both sets of proposals: the “ecosystem of excellence” and the “ecosystem of trust”. A harmonized approach to adapting the legal framework and governance, and to combating the digital divide within the EU, is essential to achieving the proposed objectives.
Adding a new layer of red tape to the already fragmented landscape of the Single Market is not what European developers and entrepreneurs need.
The actions related to the “ecosystem of excellence” should uniformly cover the EU and be supported by adequate financing.
It is important to ensure coherence between the policy and regulatory measures proposed by the different EU strategies, as their implementation will have an impact on the whole Single Market.
The Global Dimension
Many AI systems, like the large majority of software solutions, are developed collaboratively at a global level, and the developer community’s work relies on open-source resources. These aspects should not be overlooked when establishing new requirements affecting advanced software solutions deployed in the EU.