ChatGPT Under Fire: Italy’s €15 Million GDPR Fine Against OpenAI and Its Implications


The Italian Data Protection Authority (Garante) has imposed a €15 million fine on OpenAI for multiple General Data Protection Regulation (GDPR) violations related to its flagship AI chatbot, ChatGPT.[1] This significant regulatory action, finalized in December 2024, marks one of the most substantial penalties against an AI company in Europe and underscores the intensifying scrutiny over artificial intelligence technologies and their data privacy implications. Beyond the monetary penalty, OpenAI has been ordered to conduct a six-month public information campaign across Italian media platforms, creating a precedent for how regulatory authorities may address AI compliance issues in the future.

The Investigation’s Origins and Evolution

The Garante’s investigation into OpenAI began in March 2023 following a data breach that exposed some users’ chat histories and partial payment details.[2] While the breach itself was relatively minor, it drew the regulator’s attention to broader alleged compliance issues within the company. The investigation quickly escalated, with the Garante taking the unprecedented step of imposing an “immediate temporary limitation” on ChatGPT’s use in Italy, effectively banning the service from the country.[3]

This initial ban cited several privacy violations, including the “mass collection and storage of personal data for the purpose of ‘training’ the algorithms,” failure to properly report the data breach, and inadequate age verification mechanisms that potentially exposed minors to inappropriate content. The restriction, however, proved short-lived. Within a month, ChatGPT access was reinstated after OpenAI agreed to implement specific transparency and compliance measures.[4]

Despite these remedial actions, the Garante continued its investigation throughout 2023 and 2024. By January 2024, the authority formally initiated proceedings against OpenAI, culminating in the December 2024 decision to impose the €15 million fine.

Specific GDPR Violations Identified

The Garante’s investigation revealed multiple serious violations of the GDPR. Most critically, the authority found that OpenAI processed users’ personal information to train ChatGPT “without having an adequate legal basis” to do so.

This fundamental violation strikes at the heart of GDPR compliance, which requires companies to establish clear legal grounds for any processing of personal data. Additionally, the investigation found that OpenAI failed to fulfill its “transparency obligations towards users.” Under GDPR Articles 12 and 13, organizations must provide clear and comprehensive information about how they collect, use, and store personal data. The Garante determined that OpenAI’s privacy notices and disclosures fell short of these requirements.

Another significant finding concerned the lack of robust age verification mechanisms. The investigation revealed that OpenAI “did not implement a suitable age verification system” to restrict access for users under 13 years old, potentially exposing children to AI-generated content inappropriate for their developmental stage.[5]

Furthermore, the Garante found that OpenAI failed to notify authorities about the March 2023 data breach within the mandatory 72-hour timeframe required by GDPR Article 33.[6] The investigation also identified violations of the “data protection by design and by default” principle (Article 25) and the general compliance obligation (Article 24).

Beyond the Fine: Mandated Public Awareness Campaign

In an unprecedented move, the Garante invoked Article 166 of the Italian Privacy Code to require OpenAI to conduct a six-month institutional communication campaign across radio, television, newspapers, and online platforms. This campaign must educate both users and non-users about how ChatGPT processes personal data and the rights individuals have regarding their information under GDPR.

Specifically, the campaign must explain the nature of data collected (from both users and non-users), how this information is used to train AI models, and how individuals can exercise their rights to object to this use, rectify incorrect data, or delete their information entirely. This requirement represents a significant development in regulatory enforcement, focusing not just on punitive measures but on improving public understanding of AI data practices.

The Garante stated that “through this communication campaign, users and non-users of ChatGPT will have to be made aware of how to oppose generative artificial intelligence being trained with their personal data and thus be effectively enabled to exercise their rights under the GDPR.” OpenAI must collaborate with the authority to finalize the campaign’s content and implementation.

OpenAI’s Response and Next Steps

OpenAI has characterized the €15 million fine as “disproportionate,” noting that it is “nearly 20 times the revenue we generated in Italy during the relevant timeframe.” The company has confirmed its intention to appeal the decision. In its public statements, OpenAI has emphasized its cooperation with the Garante throughout the investigation. The company noted, “When Garante instructed us to cease operations of GPT in Italy in 2023, we collaborated with them to reinstate it a month later. They have since acknowledged our pioneering efforts in safeguarding privacy within AI.” OpenAI further stressed its commitment to working with global privacy authorities to “provide beneficial AI that honors privacy rights.”

Since the initial investigation began, OpenAI has established its European headquarters in Ireland, potentially shifting lead supervisory authority over future cross-border compliance matters to Ireland’s Data Protection Commission under the GDPR’s one-stop-shop mechanism. This strategic move could shape how future regulatory actions against the company unfold within the European Union.

The Broader Regulatory Context

The fine against OpenAI is part of a significant wave of regulatory scrutiny facing AI technologies globally, particularly in Europe. The case unfolds against the backdrop of the European Union’s AI Act, a comprehensive regulatory framework for artificial intelligence systems.

December 2024 saw several other major GDPR enforcement actions across Europe. Netflix received a €4.75 million fine from the Dutch Data Protection Authority for inadequate privacy notices, Meta faced a massive €251 million fine from Ireland’s Data Protection Commission for a security vulnerability in Facebook’s “View As” function, and France’s CNIL fined KASPR €240,000 for unlawfully collecting contact data from LinkedIn profiles without user consent.[7]

The €15 million penalty reportedly represents approximately 1.5% of OpenAI’s global annual revenues for the relevant period, placing it among the most significant fines issued under the GDPR for AI-related violations. It nonetheless falls well short of the maximum possible penalty under the GDPR, which can reach 4% of global annual turnover.

Implications for AI Regulation and Compliance

The Garante’s action against OpenAI signals a maturing approach to AI regulation in Europe. Regulators are moving beyond theoretical concerns to concrete enforcement actions with significant financial implications. This case establishes important precedents for how data protection authorities may address compliance issues in generative AI systems.

For AI companies, the case underscores the importance of establishing clear legal bases for data processing, implementing robust transparency mechanisms, and developing effective age verification systems. The fine highlights that the rapid pace of AI innovation does not excuse compliance shortcuts regarding fundamental data protection principles.

The required awareness campaign also suggests a regulatory focus on empowering individuals to understand and control how their data is used in AI systems. This emphasis on user rights and education may become a template for future regulatory actions in the AI space.

As regulatory frameworks continue to evolve, companies developing and deploying AI technologies must prioritize privacy and data protection compliance from the design phase. The Garante’s action demonstrates that retrofitting compliance after market launch can be costly, both financially and reputationally.

The Italian regulator’s proactive approach may inspire similar actions from other European data protection authorities, potentially leading to a more coordinated EU-wide approach to AI regulation. As OpenAI’s appeal unfolds and the awareness campaign takes shape, the case will undoubtedly continue to influence how organizations approach AI development and regulatory compliance in Europe and beyond.


[1] https://www.garanteprivacy.it/home/docweb/-/docweb-display/docweb/10085432

[2] https://syrenis.com/resources/blog/chatgpt-gdpr/

[3] https://www.lewissilkin.com/insights/2025/01/14/openai-faces-15-million-fine-as-the-italian-garante-strikes-again-102jtqc

[4] https://thehackernews.com/2024/12/italy-fines-openai-15-million-for.html

[5] https://www.newsweek.com/openai-fined-lawsuit-italian-privacy-watchdog-fined-chatgpt-2004346

[6] https://cyberpress.org/fine-italt-open-ai/

[7] https://www.linkedin.com/posts/martin-zwick-a49258214_top-5-gdpr-fines-in-december-2024-1-italy-activity-7283014728501657600–dxE/
