We have (non-European) ‘gatekeepers’ under the EU’s DMA, and encryption is still under threat in both the EU and the UK

The September 2023 European Policy Update

Artificial Intelligence

The AI Act is closer to a final version as the legislative process enters the last mile. A large part of the proposal has been agreed upon in drafting meetings, but important aspects still need to be decided at the political level. The most contentious points relate to the ‘high-risk’ classification, bans on the public use of AI (the European Parliament insists on a total ban on real-time and remote biometric identification systems), special rules for generative AI, and certain requirements proposed by the European Parliament (fundamental rights and the rule of law; environmental protection). The Council’s position (representing the governments of the EU Member States) is more balanced; it refers to ‘general-purpose AI’, whereas the Parliament’s text adds specific rules on generative AI and foundation models.

Developers Alliance presented a set of key points for the AI Act, urging EU lawmakers to keep the high-risk approach and to regulate use cases rather than the technology as such. In the case of general-purpose AI, and across value chains, AI developers, deployers, and end-users should have access to all the information and documentation necessary for compliance with the AI Act. Among other points, we also insist that the specifics and benefits of free and open-source software be recognized.

Various stakeholders published open letters, the most notable being a second open letter from European AI researchers calling for “less regulatory hurdles on open source AI in Europe.” Civil society and consumer organizations called for closing “a dangerous loophole” in the AI Act, referring to the Council’s amendment that excludes AI systems that are “purely accessory in respect of the relevant action or decision to be taken.” They also point to the Parliament’s text stating that a system is ‘high risk’ only if it poses a significant risk to fundamental rights, health and safety; if developers consider that their system does not pose such a risk, they must notify a national authority, which has three months to respond.

The UK will host the first major global summit on AI safety on November 1–2. The summit will gather “key countries, as well as leading technology organizations, academia and civil society,” with a focus “on risks created or significantly exacerbated by the most powerful AI systems, particularly those associated with the potentially dangerous capabilities of these systems.” The discussions are expected to set the framework for international collaboration on frontier AI safety, to identify areas for potential collaboration on AI safety research (including evaluating model capabilities and developing new standards to support governance), and to showcase how the safe development of AI will enable it to be used for good globally.

Competition in digital markets

Six ‘gatekeepers’ were designated by the European Commission under the Digital Markets Act (DMA). The companies have six months to demonstrate compliance with the DMA’s list of dos and don’ts for each of their designated core platform services, and will publish detailed compliance reports in March 2024.

Source: https://ec.europa.eu/commission/presscorner/detail/en/ip_23_4328

Gmail, Outlook.com, and Samsung Internet Browser meet the quantitative thresholds under the DMA, but Alphabet, Microsoft, and Samsung provided sufficiently justified arguments showing that these services do not qualify as gateways for the respective core platform services. The Commission has opened four market investigations to assess Microsoft’s and Apple’s rebuttals for services that do meet the thresholds: Bing, Edge, and Microsoft Advertising, and Apple’s iMessage, respectively. These investigations should be concluded within five months. The Commission has also opened a separate market investigation to assess whether Apple’s iPadOS qualifies as a core platform service despite not meeting the thresholds. This investigation may take longer, up to 12 months.

The European Commission published a report on the first preliminary review of the Platform-to-Business (P2B) Regulation. The Regulation, which has applied since July 12, 2020, is the first general framework applicable to ‘online intermediation services’. It was designed to benefit business users, in particular SMEs, which can have limited bargaining power relative to online platforms. Its requirements aim to ensure a predictable business environment through enhanced transparency (e.g. on rankings) and fair treatment (e.g. when a business account is suspended or products and services are blocked by the platform). App stores fall within the scope of the regulation.

The preliminary implementation report’s main findings are:

  • initial positive effects, while the regulation has not yet reached its full potential; 
  • a current lack of compliance by providers of online intermediation services, coupled with a lack of awareness among business users and smaller online platforms (especially with regard to the contractual rights and redress possibilities the regulation offers);
  • complementarity with other EU acts, such as the Digital Markets Act (DMA) and the Digital Services Act (DSA).

The report notes that “there’s a wide divergence in the number of complaints and their outcomes (12 complaints for Apple with only two overturned, and more than 3 million complaints for Google with 26% overturned),” which translates into a wide divergence in how providers of online intermediation services understand the notion of ‘complaint’. This raises questions about the level of compliance with the complaint-handling requirements.

The European Commission wants to strengthen the implementation of the regulation through several measures. For example, it plans to address the lack of awareness by organizing workshops with business users, online intermediation services, and other stakeholders across sectors.

The General Court ruled that geo-blocking of activation keys for the Steam platform infringed EU competition law (Valve Corporation vs. Commission T-172/21). The Court notes that “copyright is intended only to ensure for the right holders concerned protection of the right to exploit commercially the marketing or the making available of the protected subject matter, by the grant of licenses in return for payment of remuneration. However, it does not guarantee them the opportunity to demand the highest possible remuneration or to engage in conduct such as to lead to artificial price differences between the partitioned national markets.”

The UK CMA has provisionally accepted the restructured Microsoft/Activision transaction, after blocking the merger earlier this year. Under the new deal, Microsoft will no longer control cloud gaming rights for Activision’s content: the cloud gaming rights held by Activision will be sold to an independent third party, Ubisoft Entertainment SA (Ubisoft), before the deal is completed. The CMA also found it satisfactory that the agreement with Ubisoft requires Microsoft to port Activision games to operating systems other than Windows and to support game emulators when requested. A final decision is expected by October 18th. The European Commission had cleared the previous deal by accepting as a remedy the licensing of popular Activision games such as “Call of Duty” to rival game-streaming platforms. The Commission might review its decision in light of the new deal.

The CMA has published an initial review of the market for AI foundation models. The report offers an initial assessment of how this emerging market may develop and of the potential opportunities and risks, as well as guiding principles for further assessment of, and intervention in, the market:

  • access (ongoing ready access to key inputs)
  • diversity (sustained diversity of business models, both open and closed) 
  • choice (sufficient choice for businesses so they can decide how to use FMs)
  • flexibility (to switch or use multiple FMs according to the need)
  • fair dealing (no anti-competitive conduct, including anti-competitive self-preferencing, bundling or tying)
  • transparency (both consumers and business users are given information about the risks and limitations of FM-generated content, empowering them to make informed choices).

Data protection & privacy

The UK has approved the UK-US Data Bridge, which will enable the free flow of personal data to U.S. entities that are self-certified under the EU-U.S. Data Privacy Framework (DPF), provided those entities extend their DPF certification to cover UK data.

Meanwhile, in the EU, French lawmaker Philippe Latombe is challenging the DPF before the European Union’s General Court. Max Schrems, the Austrian privacy activist and lawyer who successfully contested the two previous transatlantic frameworks, is also expected to contest the European Commission’s adequacy decision.

The EU’s Data Governance Act has entered into force. The regulation provides a framework for the re-use of certain categories of data held by public sector bodies. It also sets rules for providers of data intermediation services (so-called data intermediaries, such as data marketplaces), and creates conditions for data altruism. An explainer is available here.

TikTok was fined €345 million by the Irish Data Protection Commission for infringements related to the processing of child users’ data. The GDPR breaches concerned certain TikTok platform settings, including public-by-default settings, the settings associated with the ‘Family Pairing’ feature, and age verification as part of the registration process. One of the findings is that TikTok implemented ‘dark patterns’ by nudging users towards more privacy-intrusive options during registration and when posting videos.

NOYB, the NGO led by Max Schrems, has filed complaints in France against three apps for illegally accessing and sharing users’ data. The apps belong to Fnac (the largest electronics retailer in France), the real estate app SeLoger, and the fitness app MyFitnessPal. The complaints argue that, after installation on an Android smartphone, “once opened, the applications immediately began to collect and share personal data, including Google’s unique Advertising ID (AdID), the model and brand of their device and local IP address with third parties.” They allege that users are given no choice to consent to or prevent the sharing of their data. NOYB states that “such extensive data collection allows the profiling of users in order to show them personalized ads and marketing campaigns to increase the revenue for the mentioned companies.”

The UK’s Information Commissioner’s Office (ICO) and Competition and Markets Authority (CMA) have published a joint position paper entitled “Harmful design in digital markets: How Online Choice Architecture practices can undermine consumer choice and control over personal information.” The document provides guidance for those who design online choice architecture on “how data protection law applies to design practices” and on which harmful practices should be avoided.

Content regulation

The EU proposal for a regulation to prevent and combat online child sexual abuse (CSA Regulation) is moving through the legislative process, with the two co-legislators (Council and Parliament) almost ready to adopt their positions. Developers Alliance joined 12 other tech industry associations in calling on EU lawmakers to safeguard encryption and limit detection orders. The draft regulation fails to exclude end-to-end encrypted content from mandated scanning. The regulation should also provide a framework for proactive voluntary measures for prevention and detection, while mandated targeted detection should be a last resort. Compliance with detection orders raises technical difficulties and a risk of general monitoring, particularly in the case of previously unknown CSAM and grooming.

Following a thorough journalistic investigation, the European Parliament’s LIBE Committee has sent a letter to Commissioner Ylva Johansson expressing “concern about recent reports published in press outlets” regarding certain civil society organizations and profits from CSAM detection software, and requesting “to receive clarifications and explanations concerning the allegations described above.”

The UK’s Online Safety Bill has passed Parliament and will soon become law. It represents a comprehensive legal framework for content regulation, similar in several aspects to the EU’s DSA, with a strong focus on online child safety. The law will be enforced by Ofcom, the media regulator, which will be able to issue fines of up to 18 million pounds ($22.3 million) or 10% of online platforms’ annual global turnover. Certain rules requiring the identification and removal of illegal content and the tackling of ‘legal but harmful’ content are controversial, as they could require the implementation of privacy-invasive measures. In response to numerous calls from various stakeholders, the government proposed an amendment during the final parliamentary debate stating that orders to scan user files “can be issued only where technically feasible,” as determined by Ofcom. UK Home Secretary Suella Braverman has, confusingly, called on Meta to “keep safeguards when rolling out end-to-end encryption on Facebook & Instagram.”

The DSA Transparency Database is live. Under the DSA, all designated “very large online platforms” and “very large online search engines” have to submit data to the Transparency Database detailing “statements of reasons” for each content moderation decision.

As part of the EU Code of Practice on Disinformation, a comparative analysis of the prevalence and sources of disinformation across major social media platforms was published. At a meeting with the signatories of the Code (44 signatories, including Facebook, Google, YouTube, TikTok and LinkedIn, as well as the advertising industry and civil society), European Commission Vice-President Vera Jourová warned about the scale of Russian disinformation and the special attention needed for the upcoming national elections and the EU elections. She also noted that, according to the comparative study, “X, formerly Twitter, which is not under the Code anymore, is the platform with the largest ratio of mis/disinformation posts.”

Cybersecurity

EU lawmakers will soon start inter-institutional negotiations (so-called trilogues) on the Cyber Resilience Act (CRA) proposal, having set their positions. The draft regulation introduces mandatory cybersecurity requirements for the design, development, production, and making available on the market of hardware and software products. Industry concerns relate to the scope, consistency with other legislation, and problematic obligations such as the mandatory reporting of unpatched vulnerabilities. As with other EU proposals (the AI Act, the revision of the Product Liability Directive), the CRA is expected to have a negative impact on the use and development of open-source software.

October is European Cybersecurity Month.


By Karina Nimară

Director of EU Policy and Head of Brussels Office - Karina previously served as Legal Advisor and Internal Market attaché at the Permanent Representation of Romania to the EU. Prior to her work with the Romanian diplomatic mission, Karina spent ten years in European Union affairs within the Romanian Government. While there she coordinated, inter alia, the process for transposition and implementation of EU legislation. Karina holds a law degree and specializes in EU law and policies. Based in the Alliance’s Brussels office, she's a tech enthusiast, enjoying the dawn of the Age of Artificial Intelligence. Other than robots, she's fascinated with cats and owls.

Related Content

Developers Alliance Joins Call for EU Policymakers to Swiftly Adopt the Extension of the Interim ePrivacy Derogation

Developers Alliance’s Reaction to the Political Agreement on the New EU Law on Liability for Defective Products

Developers Alliance files Amicus Brief to Argue that Algorithms are Protected by the First Amendment

