The November & December 2023 European Policy Update
Artificial Intelligence Act
EU lawmakers have struck a deal on the AI Act after 36 hours of intense negotiations over three days. The text reflecting the provisional political agreement will be finalized in the coming period by experts of the three institutions involved (the European Commission, which presented the proposal back in April 2021, and the two co-legislators: the European Parliament and the Council of the EU, representing the governments of EU Member States).
The main elements of the agreement announced by the lawmakers are as follows:
- Banned AI applications:
- untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases
- emotion recognition in the workplace and educational institutions
- social scoring based on social behavior or personal characteristics
- biometric categorization systems inferring sensitive data (e.g., political, religious, philosophical beliefs, sexual orientation, race)
- cognitive behavioral manipulation
- certain cases of predictive policing for individuals.
- Law enforcement exemptions are subject to safeguards for using biometric identification systems in public spaces.
- High-risk AI systems must comply with mandatory requirements and certification, including a fundamental rights impact assessment, for entering the EU market. There will be provisions tackling the allocation of responsibilities and roles of the various actors across value chains, in particular, providers and users of AI systems.
- Specific transparency obligations: deepfakes and other AI-generated content will have to be labeled, and users of an emotion recognition system will have to inform natural persons when they are being exposed to such a system.
- A specific regime for General-purpose AI systems and foundation models involves obligations such as presenting technical documentation, complying with EU copyright law, and disseminating detailed summaries about the content used for training.
- A stricter regime for “high-impact” GPAI and foundation models “with systemic risk” (if they meet certain criteria) will require model evaluations, assessment and mitigation of systemic risks, adversarial testing, reporting to the Commission on serious incidents, enhanced cybersecurity, and reporting on their energy efficiency.
- The governance for the enforcement of the regulation includes setting up an AI Office within the European Commission, which will mainly be responsible as a regulator for the “most advanced AI models,” supported by a scientific panel of independent experts, and an AI Board, which would comprise member states’ representatives and which will receive technical expertise from an advisory forum for stakeholders.
- The fines were set at €35 million or 7% of the company’s global annual turnover for violations of the banned AI applications, €15 million or 3% for violations of the AI Act’s obligations, and €7.5 million or 1.5% for the supply of incorrect information. The agreement promises “more proportionate caps” on administrative fines for SMEs and start-ups.
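As a rough illustration of how these two-part caps work: under EU digital regulations, a company-level fine ceiling is typically the higher of the fixed amount and the percentage of worldwide annual turnover. The sketch below assumes that higher-of rule; the function name and figures used in the example are illustrative only.

```python
def fine_cap(fixed_cap_eur: float, turnover_pct: float, global_turnover_eur: float) -> float:
    """Upper bound of an administrative fine: the higher of a fixed amount
    and a percentage of worldwide annual turnover (the rule that typically
    applies to companies under EU digital regulations)."""
    return max(fixed_cap_eur, turnover_pct / 100 * global_turnover_eur)

# Tier for banned AI applications (EUR 35 million or 7% of turnover),
# for a hypothetical firm with EUR 1 billion in global annual turnover:
cap = fine_cap(35_000_000, 7.0, 1_000_000_000)  # 70,000,000 EUR
```

For smaller firms the fixed amount dominates: with €100 million in turnover, 7% is only €7 million, so the €35 million figure sets the ceiling.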
Ahead of the final round of political negotiations, the Developers Alliance has co-signed a joint industry statement alongside five other tech industry associations, warning about the latest proposals that did not take into account the complexity of the AI value chain and were not consistent with the AI Act’s technology-neutral risk-based approach. In particular, the statement emphasized that:
- The proposed classification of GPAIs and AI Foundation Models as “highly capable” or as having “high impact” is based on criteria that are not clearly linked to the level of risk that an AI system developed with GPAIs and Foundation Models may pose.
- The AI Act should formalize a mechanism to ensure the sharing of relevant and necessary information between Foundation Model providers and deployers, for instance, about model capabilities and limitations.
- The existing EU legal framework already covers the use of copyrighted data to train AI systems and introducing any additional requirements through the AI Act is not a viable legal solution.
The provisional political agreement is awaiting formal approval by the European Parliament and the Council of the EU. The AI Act will enter into force 20 days after publication in the Official Journal and become applicable two years after it enters into force, except for some specific provisions (bans will apply after 6 months, while the rules on General-Purpose AI will apply after 12 months).
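The staggered deadlines above can be laid out with simple date arithmetic. The publication date below is hypothetical (the actual Official Journal date was not yet known at the time of writing); only the offsets come from the agreement.

```python
from datetime import date, timedelta

# Hypothetical Official Journal publication date, for illustration only.
publication = date(2024, 6, 12)
entry_into_force = publication + timedelta(days=20)  # 20 days after publication

def months_later(d: date, months: int) -> date:
    # Minimal month arithmetic (keeps the day-of-month; fine for a sketch).
    y, m = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + y, month=m + 1)

milestones = {
    "bans on prohibited AI apply": months_later(entry_into_force, 6),
    "GPAI rules apply": months_later(entry_into_force, 12),
    "general applicability": months_later(entry_into_force, 24),
}
```

With a 12 June publication, for example, entry into force falls on 2 July, the bans apply the following 2 January, and the act becomes generally applicable two years later.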
The European Commission has launched a call for interest for companies to join the AI Pact. It is a scheme allowing the voluntary commitment of industry to anticipate the AI Act and to start implementing its requirements before the legal deadline.
Cybersecurity
EU lawmakers have reached a political agreement for the Cyber Resilience Act which will have a significant impact on software development and maintenance. Hardware and software will have to bear the CE mark to indicate that they comply with the Regulation’s requirements as a condition to be sold in the EU. Software developers and hardware manufacturers of “products with digital elements” (from connected doorbells to baby monitors and Wi-Fi routers) will have to “implement cyber security measures across the entire lifecycle of the product, from the design and development, to after the product is placed on the market.” The regulation requires, inter alia, that security updates be applied automatically and separately from functionality ones. In addition, security incidents and actively exploited vulnerabilities must be reported to the competent authorities, together with the mitigation actions. The industry warned about the risks of disseminating information about unpatched actively exploited vulnerabilities (see the latest joint industry statement co-signed by Developers Alliance). The compromise partially addresses these concerns by introducing some safeguards for the transmission of information.
Another controversial aspect during the legislative process was related to the scope of the regulation in relation to open source. The regulation will apply to software developed in the context of commercial activities, with open source contributors being exempted from compliance obligations. Non-profit organizations that sell open source software on the market but reinvest all the revenues in not-for-profit activities were also excluded from the scope (as reported by Euractiv).
The political agreement needs to be formally endorsed by both the European Parliament and the Council. The Cyber Resilience Act will enter into force on the 20th day following its publication in the Official Journal, with a period of 36 months for the manufacturers, importers and distributors of hardware and software products to adapt to the new requirements. There is, however, only a 21-month grace period for the obligations related to reporting incidents and vulnerabilities.
The UK has also adopted new product safety requirements for connected products, which will apply from April 2024. They build upon an existing voluntary code of conduct, and are set out in the Product Security and Telecommunications Infrastructure Act 2022 (also known as the PSTI). The PSTI introduces product security requirements for connected products (including IoT devices such as smart speakers, connected devices, and certain products used to operate computers) and separately updates the UK’s telecommunications infrastructure regime. The UK’s regime is similar to the EU’s Cyber Resilience Act.
Data Protection and Privacy
The European Data Protection Board (EDPB) has issued a binding decision on the processing of personal data for behavioral advertising by Meta. The decision instructed the Irish DPA, as lead supervisory authority, to take final measures regarding Meta within two weeks and to impose a ban on the processing of personal data for behavioral advertising on the legal bases of contract and legitimate interest. The decision followed a request from the Norwegian Data Protection Authority; as a result, the final measures will take effect across the entire European Economic Area (EEA).
The privacy activist organization NOYB and the European consumer organization BEUC filed complaints against Meta’s recent introduction of the pay-or-consent model for its services. NOYB’s complaint was filed with the Austrian data protection authority about “another attempt to circumvent EU privacy laws,” while BEUC’s filing was with the network of consumer protection authorities (CPC) concerning unfair commercial practices.
The EDPB has published a data protection guide for small businesses, useful both for those already familiar with their obligations under the GDPR and for those still struggling with compliance.
The EDPB has also adopted Guidelines on the technical scope of Art. 5 (3) of the ePrivacy Directive, with the aim of clarifying which technical operations, particularly new and emerging tracking techniques, are covered by the Directive. The EDPB’s disclaimer for the guidelines mentions that they “do not address how consent should be collected, or the exemptions set out in the article.”
The Information Commissioner’s Office (ICO) has warned the UK’s top websites to comply with data protection law, stating that it will use its enforcement powers if, within 30 days, they do not make it as easy for users to “Reject All” advertising cookies as it is to “Accept All.”
The Italian data protection authority has opened an investigation into the collection of personal data online to train algorithms. A fact-finding investigation was launched “to verify the adoption of suitable security measures to prevent the massive collection (web scraping) of personal data for the purposes of training artificial intelligence (AI) algorithms by third parties.” The investigation concerns all public and private entities.
The European Parliament has set out a balanced position on the CSAM Regulation. The plenary has endorsed the position adopted last month at the committee level. The Parliament proposes safeguards for preserving E2E encryption and excluding client-side scanning, as well as restricted detection and removal orders. It also proposes mandatory age verification systems for porn platforms, and that services targeting children should require user consent for unsolicited messages, have blocking and muting options, and boost parental controls, all by default. The other co-legislator, the Council, representing the EU Member States’ governments, has yet to adopt its position.
The privacy activist organization NOYB has filed a complaint against the European Commission over the targeted advertising campaign to promote its proposal for the CSAM regulation.
The EU Court of Justice has ruled against the Austrian online content law, as contrary to EU law (Case C-376/22 Google Ireland & Others). The Court reiterated the country-of-origin principle of the e-commerce Directive (and also of its successor, the Digital Services Act), which ensures that digital service providers are subject to enforcement by the authorities of the Member State where they are established.
The Digital Services Terms and Conditions Database was launched by the European Commission as a transparency tool in support of the implementation of the Digital Services Act. It allows tracking terms and conditions of online platforms such as social media, app stores or marketplaces. The database is built with open-source software and currently covers 790 terms and conditions from various service providers.
The European Commission has sent a series of formal information requests under the Digital Services Act. Several companies were requested to provide detailed information on how they comply with their obligations, related to risk assessments and mitigation measures to protect minors online (TikTok, YouTube, Meta, Snap), and to protect consumers online, in particular about the dissemination of illegal products online (Amazon, AliExpress).
Competition in Digital Markets
The gatekeeper designation decisions under the DMA were contested by TikTok, Meta and Apple. The designated gatekeepers have until March 6th to comply with the new rules. The appeals have no suspensive effect, meaning that the three above-mentioned companies still need to comply on the due date.
The European Commission has sent a statement of objections to Adobe over the proposed acquisition of Figma. After an in-depth investigation, the Commission reached the preliminary conclusion that the transaction may significantly reduce competition in the global markets for “the supply of interactive product design tools where Figma is the clear market leader and Adobe one of its largest competitors,” and also “for the supply of vector editing tools and supply of raster editing tools, by eliminating Figma as a potential competitor, thereby strengthening Adobe’s dominance in these markets.”
UK’s Competition and Markets Authority (CMA) also shared concerns about Adobe’s deal to buy Figma. It has provisionally found that the transaction “would likely harm innovation for software used by the vast majority of UK digital designers.”
The CMA has presented its initial review on AI foundation models, reflecting an ongoing broad stakeholder consultation. An updated report will be presented in March 2024, focusing on market developments and their impact on competition and consumers; on how foundation model developers are accessing key inputs, such as expertise, data and computing power, including through investments, mergers, acquisitions and partnerships; and on the role AI semiconductor chips (“AI chips”) play in the value chain.
The UK government has introduced amendments to the Digital Markets, Competition, and Consumers Bill regarding the review process and the proportionality of regulatory interventions. The administrative appeals process is maintained for all regulatory decisions (except fines), based on judicial review principles. The approach was justified by the need not to allow “legal challenges to cause the regime to get bogged down in the courts.” The amendments also provide that the regulator cannot impose an intervention on a firm unless it is proportionate to do so.
The UK Court of Appeal has upheld CMA’s decision to open a market investigation into cloud mobile browsers and cloud gaming. Previously, Apple had successfully challenged the decision’s legality before the Competition Appeal Tribunal (CAT).
Germany’s Federal Cartel Office (Bundeskartellamt-BKA) has decided that the Microsoft and OpenAI partnership is not subject to merger control. The BKA stated that “if Microsoft were to increase its influence on OpenAI in the future, it would have to be re-examined whether a notification obligation exists under competition law.” The decision was issued before the recent governance crisis at OpenAI.
EU co-legislators have endorsed the agreement on the Data Act. The regulation will allow users of connected devices, ranging from smart household appliances to intelligent industrial machines, to give access to the data generated by their use to third parties (for aftermarket services). The political agreement promises “an adequate level of protection of trade secrets and intellectual property rights, accompanied by relevant safeguards against possible abusive behavior.” The regulation contains additional conditions for business-to-government data sharing. The Data Act also sets out rules intended to ease switching between cloud service providers, and to prevent unlawful transfer or access to non-personal data by third countries. The Data Act will enter into force on the 20th day following its publication in the Official Journal and will become applicable 20 months after the entry into force.
The State of European Tech 2023, the annual comprehensive report on the European startup ecosystem, (re)confirms Europe’s untapped potential. While still under the impact of a global slowdown, the European ecosystem has recovered to a total value of $3 trillion. Although funding has nearly halved from 2022 and is harder to access than in the US, Europe is home to more new startups. Europe still struggles with scaling up, with the report showing that after five years, US tech startups are 40% more likely to have successfully secured venture capital funding than European ones. US investors are pulling back, which calls for more local capital. Layoffs have stabilized, and early-stage companies account for almost double the number of new additions to the tech industry. The European tech sector attracts global talent at a high rate, and it also outdoes the US in terms of the annual volume of founders starting new tech startups. Despite having a good pipeline of exit candidates, Europe doesn’t offer a promising outlook, especially for IPOs.
The European Commission has announced that it will open access “for ethical and responsible AI start-ups” to train their models using European supercomputers. This is part of the EU AI Start-Up Initiative. The European High Performance Computing Joint Undertaking (EuroHPC JU) has opened a call for submissions until 27 February 2024. The selected projects are expected to start in September 2024 for a duration of 36 months.
A new set of calls under the Horizon Europe Digital, Industry, and Space work programme offers €290 million in funding for research projects supporting European competitiveness, in AI, data and robotics, and for the development of cloud to edge servers.
The European Parliament has adopted a resolution on digital worlds. MEPs warn about the risks related to mental health (especially of children), data protection, intellectual property rights, cyber violence, financial fraud, and environmental impact. They also want “to reduce technological dependencies on third countries and support EU businesses.”
The political agreement on the EU Digital Identity Wallet is awaiting endorsement. The Parliament’s ITRE Committee was supposed to endorse the agreement on November 28th, but the vote was postponed until the text related to EU web certificates is clarified. The controversy mainly relates to technical means that could allow governments to intercept encrypted web traffic. An open letter signed by 551 scientists and researchers from 42 countries, as well as numerous NGOs, raises several concerns. It is not yet clear if the political agreement addresses these.
The EU and Canada have launched a Strategic Digital Partnership to collaborate in different areas, especially international connectivity, artificial intelligence, cybersecurity, and online platform regulation.