Italy Bans ChatGPT While the UK Takes a Pro-innovation Approach to AI Regulation

The March 2023 European Policy Update

Artificial Intelligence

The Italian Data Protection Authority (il Garante) has imposed a temporary limitation on the processing of Italian users’ data through ChatGPT.

The reasons are: 

lack of transparency for users and data subjects whose data are collected
apparent lack of legal basis for “the massive collection and processing of personal data” used for algorithm training
processing of inaccurate personal data, as the output does not always match factual circumstances
lack of age verification, exposing children to inappropriate answers, even though the T&Cs specify that the service is addressed to users above 13 years of age

OpenAI, as a US company not established in the EU but with a designated representative in the European Economic Area (EEA), doesn’t benefit from the GDPR’s ‘one-stop-shop’ mechanism, meaning it can be summoned by any data protection authority within the EEA. It will have to notify il Garante within 20 days of the measures implemented to comply with the order, risking a fine of up to EUR 20 million or 4% of total worldwide annual turnover, should Italy be able to assert legal jurisdiction. Sam Altman, the CEO of OpenAI, has announced that the company “have ceased offering ChatGPT in Italy.”

The UK Government has unveiled its pro-innovation approach to AI regulation. The 2023 AI White Paper presents an innovative and iterative approach that relies on collaboration between government, regulators, and businesses. The UK government doesn’t intend to introduce new legislation right away, considering that rushing to legislate too early “would risk placing undue burdens on businesses.” The proposed characteristics of the UK legal framework for AI are: pro-innovation, proportionate, trustworthy, adaptable, clear, and collaborative. The objective is “regulating the use – not the technology.” The government is supporting AI assurance techniques and technical standards as tools for trustworthy AI. The policy paper focuses on the measures needed to ensure “new central functions to undertake activities such as system-wide risk monitoring and evaluation of the AI regulation framework.” Stakeholders are invited to provide feedback until June 21st.

The negotiations for the position of the European Parliament on the AI Act still need to overcome controversial points, such as scope, banned practices, obligations for providers of ‘high-risk AI systems’ and general purpose AI, enforcement, and governance. There seems to be a compromise on the definition of AI, to be aligned with the OECD’s. Special attention is given to regulatory sandboxes as an important tool to support startups and SMEs. The Parliament’s position is expected to be adopted by the end of April, which will allow it to start the negotiations with the EU Council. The governments of the Member States voted on their position in December last year.

Developers Alliance was invited to the European Parliament to present its perspective on the AI Act, specifically on tackling (foreseeable and unforeseeable) risks. The event was organized by MEP Brando Benifei, co-rapporteur for the AI Act, and gathered policymakers, academics, and representatives of industry and civil society for a broad-ranging discussion on the EU’s approach to regulating AI-related risks and liability. The Alliance’s contribution emphasized the need for a risk-based approach focused on use cases and for proper risk management processes covering the development, deployment, and use of AI systems. Due to the broad application of AI technologies, the AI Act will be implemented in complementarity with other pieces of legislation (e.g., the GDPR, the Digital Services Act, the upcoming Cyber Resilience Act, consumer protection legislation, etc.), which already address various types of risks.

Europol, the EU law enforcement agency, has published a report on the risks and opportunities of large language models (LLMs). The report presents the conclusions of a series of workshops with experts from across the organization on the risks (how criminals can abuse LLMs) as well as the opportunities (how LLMs may assist investigators in their daily work).

The UK’s data protection regulator, the Information Commissioner’s Office (ICO), has published updated Guidance on AI and data protection. The sections on the accountability and governance implications of AI, fairness, and definitions were updated. A new section on transparency and a new annex on fairness in the AI lifecycle were added. The ICO clarifies that the guidance is not a statutory code; it is intended to provide “advice on how to interpret relevant data protection law as it applies to AI, and recommendations on good practice for organizational and technical measures to mitigate the risks to individuals that AI may cause or exacerbate.”

Data Regulation

The EU co-legislators have started the negotiations on the Data Act. Both the Council’s and the European Parliament’s positions extend the initial scope of the proposal (focused on IoT products) in an unclear manner. The Council, for example, explicitly targets “the functionalities of the data collected by connected products instead of the products themselves.” The proposed data sharing obligations also cover ‘related services’, which include “software and its updates.” Both positions also address the protection of trade secrets and IP rights, stricter conditions on business-to-government data requests, and limiting compensation to the B2B context. Other critical points of the proposal relate to the conditions for switching between providers of cloud services and other data processing services, and to international data transfers by cloud service providers.

The UK government has presented its second draft of the data protection reform legislation, the Data Protection and Digital Information (No.2) Bill. The No.2 Bill follows the first draft presented last year, without major differences. It proposes a simplified legal basis for data processing, in the form of a limited list of “recognized legitimate interests.” Other important amendments include: flexible record-keeping on a risk-based approach, clarifications on automated decision-making, extending the scope of scientific research to any type of research pursued via a commercial or non-commercial activity, and maintaining the validity of transfer mechanisms implemented before the reform. The legislation is expected to be adopted in the first part of next year.

Consumer Data Protection & Content Regulation

The Italian Competition Authority is carrying out an investigation into TikTok in relation to viral dangerous content (the ‘French scar’ challenge). According to the authority, which is also in charge of consumer protection, “TikTok has failed to implement appropriate mechanisms to monitor the content published by third parties” and to enforce its own Guidelines regarding the removal of dangerous content inciting suicide, self-harm, and poor nutrition. The authority is also taking issue with TikTok’s algorithm, “which exploits user data to customize advertising and to show content similar to previously viewed and liked content.”

The French Data Protection Authority (CNIL) has announced its enforcement priorities for the current year. The investigations will focus on: the use of ‘smart’ cameras by public organizations, the use of personal credit incident reports, the management of health records, and mobile applications. With regard to mobile apps, CNIL will continue to scrutinize the use of identifiers following the update of its recommendations on the use of cookies and other trackers. 

A legislative proposal on digital consent and age verification is being fast-tracked in the French Parliament. Social media platforms will be required to implement technical solutions to verify users’ age and parental consent, which will be certified by the audiovisual and privacy regulators, ARCOM and CNIL. A special amendment requires that parental consent for minors under the age of 13 be allowed only for those platforms labeled according to conditions to be established by a government decree. An additional amendment prescribes an obligation to collect parents’ authorization not only for the future accounts of children under 13, but also for existing ones. Fines for non-compliance rise to up to 1 percent of a platform’s annual global turnover. The proposal, supported by the government, still needs to be approved by the Senate.

The French government has banned the use of any “entertainment apps,” not only TikTok, on public servants’ work smartphones. An exception will be allowed only for an institution’s official communications.

WhatsApp undertook commitments on transparent changes to its terms of service and respecting users’ choices. The measures will be implemented following a dialogue with EU consumer protection authorities and the European Commission (the CPC network). WhatsApp will make it easier for users to reject updates when they disagree with them, and will provide clear explanations when such a rejection leads the user to no longer be able to use WhatsApp’s services. WhatsApp has also confirmed that users’ personal data are not shared with third parties or other Meta companies for advertising purposes. The scrutiny was triggered by complaints from the European Consumer Organisation (BEUC) and eight of its member associations.

Meta announced a change in the legal basis used for processing personalized advertisements in the EU, following the request of the Irish Data Protection Commission. In order to process certain first-party data in the EU for the purpose of serving behavioral advertising, Meta will rely on ‘Legitimate Interests’ instead of ‘Contractual Necessity’. The announcement notes that the GDPR doesn’t impose a hierarchy between legal bases, and therefore none should be considered more valid than any other.

Noyb, the organization led by Max Schrems, the well-known privacy activist and the complainant in the cases before the Irish Data Protection Commission, has announced its intention to contest the solution. Schrems considers that collecting users’ consent through an “opt-in” system is the only way to ensure compliance with the GDPR in this case.

Competition in digital markets

Developers Alliance participated in the 3rd DMA workshop organized by the European Commission. The workshop allowed interested stakeholders to discuss the implementation of the app-store-related provisions of the Digital Markets Act, focusing on in-app payment systems, steering, consumption-only apps, web-based apps, sideloading, alternative app stores, and FRAND conditions of access to app stores. Developers Alliance presented a pragmatic perspective on the changes to app ecosystems, arguing that these should benefit all app developers rather than narrow business interests, and that the DMA should enhance, not degrade, the current app ecosystems. One way to avoid unintended consequences would be for gatekeepers’ compliance solutions to be tested beforehand with developers and consumers.

The European Commission has approved the acquisition of Photomath by Google. Photomath is an app that provides free and premium services for online homework and study help, using a smartphone’s camera to scan and solve math problems. After investigating the merger’s impact on the markets for online homework and study help tools that include math as a subject offering, and for general search services, the Commission concluded that it wouldn’t reduce competition.

The German Competition Authority (Bundeskartellamt) has started a procedure to decide whether Microsoft is a company “of paramount significance for competition across markets.” The special regime under section 19a of the German Competition Act allows the authority to ban certain commercial conduct considered anticompetitive for such companies (e.g., self-preferencing or pre-installation of services). Amazon, Google, and Meta have already been designated as having paramount significance for competition across markets, while the procedure regarding Apple is at an advanced stage.

The UK Competition Authority (CMA) has narrowed its scope of concerns for the Microsoft – Activision merger review. The CMA has concluded that the merger will not affect competition in console gaming services “because the cost to Microsoft of withholding Call of Duty from PlayStation would outweigh any gains from taking such action.” The investigation continues only with regard to the impact on the cloud gaming market. The final conclusion is expected by the end of April. The European Commission’s decision on the merger will be announced a month later.

The CMA has combined two investigations regarding alleged anti-competitive conduct by Google in ad tech. The investigation into a suspected anti-competitive agreement between Google and Meta (the so-called “Jedi Blue” agreement) was combined with an investigation into Google’s conduct in relation to header bidding services, due to the interconnection between the two.


Cybersecurity

The governments of the United States, Australia, Canada, Costa Rica, Denmark, France, New Zealand, Norway, Sweden, Switzerland, and the United Kingdom have signed a joint statement on efforts to counter the proliferation and misuse of commercial spyware. The signatories commit to implementing strict measures of export control “of software, technology, and equipment to end-users who are likely to use them for malicious cyber activity, including unauthorized intrusion into information systems, in accordance with our respective legal, regulatory, and policy approaches and appropriate existing export control regimes.” They also pledge that any commercial spyware they use “is consistent with respect for universal human rights, the rule of law, and civil rights and civil liberties.” 

ENISA, the EU’s Cybersecurity Agency, has published an assessment tool for SMEs’ level of cybersecurity maturity. The tool includes an evaluation of the organization’s cybersecurity level and a remediation plan to improve it based on best practices. The assessment covers three key areas: the preparedness of human resources, understanding and utilizing technology, and implementing the right processes to tackle cybersecurity risks.

ENISA has also launched a site with comprehensive information on EU cybersecurity certification. The voluntary certification schemes can be used to help ensure compliance with relevant regulations such as the NIS2 Directive (cybersecurity standards for critical infrastructure), the eIDAS and EU Wallet Regulation, the upcoming Cyber Resilience Act (CRA) (cybersecurity baseline standards attested by CE marking), and the AI Act.


The European Commission has proposed rules to facilitate cross-border business and increase transparency. Specifically, red tape would be cut through: an EU Company Certificate containing a basic set of information about companies, thus avoiding the re-submission of information (the “once-only principle”); a multilingual standard model for a digital EU power of attorney authorizing a person to represent the company in another Member State; and the removal of formalities such as the need for an apostille or certified translations of company documents. Transparency and the exchange of accurate business information will be available through the Business Registers Interconnection System (BRIS). The proposal was submitted to the European Parliament and the Council for adoption.

The European Parliament is calling on the European Commission to adjust its Standardization Strategy. The report stresses the need to issue standardization requests only after carefully assessing the state of the existing relevant standards. It also recalls that “standards are voluntary, market-driven, non-legally binding,” and warns that “neither standards nor common specifications should address fundamental rights or socio-economic issues.” The EP insists on better involvement of SMEs and on improving coordination and engagement at the international level with “like-minded democratic partners.” Standards will play a significant role in the implementation of the upcoming AI Act, the Cyber Resilience Act, and the Data Act.

The European Commission has adopted two multiannual work programs for the Digital Europe Programme worth €1.3 billion. The main work program (€909.5 million), for the period of 2023 – 2024, covers the deployment of projects that use digital technologies such as data, AI, cloud, and advanced digital skills. The second program (€375 million), for the same period, is focused on cybersecurity. It will be implemented by the European Cybersecurity Competence Centre. The calls will open soon to businesses and public entities.

The European Commission has published a Concept Paper on the role of software and user experience in the EU’s automotive industry.


By Karina Nimară

Director of EU Policy and Head of Brussels Office - Karina previously served as Legal Advisor and Internal Market attaché at the Permanent Representation of Romania to the EU. Prior to her work with the Romanian diplomatic mission, Karina spent ten years in European Union affairs within the Romanian Government. While there she coordinated, inter alia, the process for transposition and implementation of EU legislation. Karina holds a law degree and specializes in EU law and policies. Based in the Alliance’s Brussels office, she's a tech enthusiast, enjoying the dawn of the Age of Artificial Intelligence. Other than robots, she's fascinated with cats and owls.




©2023 Developers Alliance All Rights Reserved.