
Legal Challenges in the Age of Artificial Intelligence


The rapid development of artificial intelligence (AI) has reshaped debates across law, policy, and society, creating both opportunities and profound legal challenges. Generative AI (GenAI) in particular, with its ability to create text, images, music, and other content, raises questions about authorship, copyright protection, and the fair remuneration of rights holders.[1] At the same time, the increased autonomy and opacity of AI systems test traditional frameworks of liability, causation, and legal personality.[2] Although we have already written about the legal challenges of AI training from a copyright perspective,[3] as well as the question of legal personhood for AI,[4] two recent documents shed new light on these issues, approaching them from different but complementary perspectives: the UK Law Commission’s AI and the Law Discussion Paper and the European Parliament’s (EP) draft report on Copyright and Generative AI. While the former explores broader liability concerns and the structural challenges AI poses to the legal system, the latter focuses on the specific implications of GenAI for intellectual property rights in the European Union (EU). Read together, they highlight the main issues the legal field faces when it comes to artificial intelligence.

Copyright in the Age of Generative AI

Generative AI distinguishes itself from earlier AI systems by producing original-seeming outputs rather than merely classifying or predicting.[5] These outputs often rely on large training datasets that include copyright-protected works, leading to widespread concerns about unauthorised use. According to the European Parliament, this practice undermines the economic sustainability of the creative industries, as rights holders are rarely remunerated and lack effective tools to prevent or monitor how their works are used in AI training.[6] Additionally, the mixed results of copyright litigation against GenAI companies, for example in the United States and Germany, show the difficulty of proving infringement, or even of establishing standing.[7]

In the EU, Article 4 of the Copyright in the Digital Single Market (CDSM) Directive[8] introduced a text and data mining (TDM) exception, permitting automated analysis of digital data to generate information.[9] However, this exception was neither designed nor intended to regulate AI training, and it remains unclear whether GenAI developers can lawfully rely on it.[10] As scholars note, the TDM exception is not effective for developing large language models (LLMs), and is too narrow in scope to properly apply to AI systems.[11] The EP stresses that this ambiguity must be resolved, proposing either a new dedicated exception for GenAI training or a revision of Article 4 to explicitly cover this use.[12] In both scenarios, rights holders must retain a robust right to opt out through standardised, machine-readable mechanisms.[13]

Transparency and Explainability

A recurring theme across both documents is the demand for greater transparency. For the EP, transparency means obliging AI providers to disclose all copyright-protected content used in training, ideally through itemised lists managed by the European Union Intellectual Property Office (EUIPO).[14] Such a mechanism would enable rights holders to assert claims, enforce opt-outs, and ensure that their works are not unfairly appropriated.

The Law Commission approaches transparency from a different angle, highlighting the “black box” nature of AI models.[15] Machine learning systems, particularly deep neural networks, often produce outputs whose reasoning is opaque even to their developers. This opacity complicates liability determinations, as it may be impossible to establish why a system produced a harmful output or whether the harm was foreseeable.[16] The Commission notes that this lack of explainability is not simply a technical issue but a legal one, as it undermines traditional tests of causation and responsibility.

In both contexts, transparency is framed as a precondition for accountability. For copyright, it ensures rights holders can monitor and enforce their entitlements. For liability more broadly, it provides courts and regulators with the information necessary to assign responsibility when harms occur.

Liability and Legal Personality

The Law Commission devotes significant attention to liability, warning of potential “liability gaps” in cases where autonomous AI systems cause harm.[17] Traditional tort and criminal law assume that liability attaches to natural or legal persons. But when AI systems act autonomously, it may be unclear whether developers, deployers, or end-users should be held responsible.[18] Despite this lack of clarity in some respects, many legal scholars argue that AI-based systems should be viewed as products, with legal responsibility resting with humans.[19]

For instance, if a generative AI system disseminates defamatory content or biased hiring recommendations, liability might fall on different actors along the AI supply chain – from data providers to software developers to end-users.[20] The Commission underlines that existing private law doctrines, such as negligence or product liability, may not always be sufficient to capture these complexities.[21]

The European Parliament’s report indirectly engages with this question by proposing an “irrebuttable presumption” of use: if AI providers fail to comply with transparency obligations, courts should presume that copyrighted material has been used for training.[22] This approach places responsibility squarely on AI providers, reducing the evidentiary burden on rights holders and addressing asymmetries in information access.

Both perspectives converge on the need to clarify liability rules in order to maintain legal certainty. Yet they also differ in emphasis: the EP focuses on protecting rights holders within the copyright ecosystem, while the Law Commission addresses broader risks of accountability gaps in tort and criminal law.

Authorship and Human-Centric Protection

Another major area of convergence is the insistence on human authorship as the foundation of copyright law. Under established EU jurisprudence, a “work” requires originality reflecting the author’s own intellectual creation.[23] Since AI systems lack human intent or creativity, their outputs cannot qualify as copyright-protected works.[24]

The EP goes further, recommending that AI-generated content should remain definitively ineligible for copyright protection, thereby preserving the public domain status of such outputs.[25] This prevents AI providers from monopolising cultural production while ensuring that intellectual property continues to reward human creators.

The Law Commission questions whether AI systems should ever be granted legal personality or authorship. Similarly, some legal scholars – despite approaches likening AI personhood to corporate personhood – advocate treating AI systems as products.[26] While acknowledging the novel challenges posed by AI, the Commission cautions against premature reforms that could undermine accountability by granting rights to non-human entities.[27] Instead, it suggests that legal systems should remain firmly grounded in human responsibility, even as AI becomes increasingly sophisticated.

Innovation and Legal Safeguards

Both reports stress that Europe must balance innovation with robust legal safeguards. The Parliament warns that without clear legal conditions for AI training, Europe risks falling behind global competitors, jeopardising its technological sovereignty.[28] At the same time, it insists that copyright protections must not be weakened in pursuit of innovation, since fair remuneration for creators is essential for a sustainable cultural ecosystem.[29]

The Law Commission echoes these concerns, noting that legal uncertainty can impede innovation by deterring investment and making insurance coverage difficult to obtain.[30] It emphasises that reform must aim to clarify responsibility without unnecessarily constraining the beneficial use of AI. Legal frameworks should thus function as enablers rather than obstacles, providing clear guardrails that foster trust in the technology.[31]

In both accounts, innovation and protection are not opposing imperatives but mutually reinforcing goals. Clearer rules on liability and copyright enable responsible innovation by reducing uncertainty and ensuring that the benefits of AI development are shared fairly.

Conclusion and Outlook

The Law Commission and the European Parliament both underline the profound transformation AI is bringing to legal doctrine. While their emphases differ – the former focusing on liability and the latter on copyright – their analyses converge on key principles: transparency, accountability, and the preservation of human authorship.

Looking ahead, a coherent framework for AI governance in Europe will require integration of these perspectives. Copyright reform must clarify the lawful use of works for AI training while ensuring fair remuneration for rights holders. Liability frameworks must adapt to prevent accountability gaps without undermining innovation. Above all, both must remain anchored in the principle that AI systems, however advanced, are tools; the responsibility for their design, deployment, and consequences must therefore remain with humans.


[1] European Parliament. Committee on Legal Affairs – Draft Report on Copyright and Generative Artificial Intelligence: Opportunities and Challenges (2025/2058(INI)), 27 June 2025 (hereinafter: European Parliament Report).

[2] Law Commission. AI and the Law: A Discussion Paper. 2025 (hereinafter: Law Commission Discussion Paper).

[3] See our previous analysis here: https://ceuli.com/navigating-the-copyright-minefield-legal-challenges-of-ai-training-and-content-use/.

[4] See our previous analysis here: https://ceuli.com/legal-personhood-for-ai-challenges-and-future-possibilities/.

[5] European Parliament Report.

[6] European Parliament Report.

[7] For Germany, see: Mueller, Sandra. “Breaking News from Germany: Hamburg District Court Breaks New Ground with Judgment on the Use of Copyrighted Material as AI Training Data.” IPTechBlog, 11 October 2024, https://www.iptechblog.com/2024/10/breaking-news-from-germany-hamburg-district-court-breaks-new-ground-with-judgment-on-the-use-of-copyrighted-material-as-ai-training-data/; for the United States, see: Rao, Sasha S., and Todd M. Hopfinger. “Newsrooms vs Neural Nets: How Courts Are Handling DMCA Claims against GenAI.” Reuters, 27 August 2025, https://www.reuters.com/legal/legalindustry/newsrooms-vs-neural-nets-how-courts-are-handling-dmca-claims-against-genai-2025-08-27/. A similar case is pending in the United Kingdom, see: Penningtons Manches Cooper. “Generative AI in the Courts: Getty Images v Stability AI.” 16 February 2024, https://www.penningtonslaw.com/news-publications/latest-news/2024/generative-ai-in-the-courts-getty-images-v-stability-ai.

[8] European Union. Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on Copyright and Related Rights in the Digital Single Market and Amending Directives 96/9/EC and 2001/29/EC. PE/51/2019/REV/1, Official Journal of the European Union, L 130, 17 May 2019, pp. 92–125, https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32019L0790.

[9] European Parliament Report.

[10] European Parliament Report.

[11] Novelli, Claudio, Federico Casolari, Philipp Hacker, Giovanni Spedicato, and Luciano Floridi. “Generative AI in EU Law: Liability, Privacy, Intellectual Property, and Cybersecurity.” Working Paper, arXiv, 2024, https://arxiv.org/pdf/2401.07348.

[12] European Parliament Report.

[13] European Parliament Report.

[14] European Parliament Report.

[15] Law Commission Discussion Paper.

[16] Law Commission Discussion Paper.

[17] Law Commission Discussion Paper.

[18] Law Commission Discussion Paper.

[19] Cheong, Ben Chester. “Granting Legal Personhood to Artificial Intelligence Systems and Traditional Veil-Piercing Concepts to Impose Liability.” SN Social Sciences, vol. 1, 2021, https://doi.org/10.1007/s43545-021-00236-0.

[20] Law Commission Discussion Paper.

[21] Law Commission Discussion Paper.

[22] European Parliament Report.

[23] European Parliament Report.

[24] European Parliament Report.

[25] European Parliament Report.

[26] Cheong (2021); Avila Negri, Sergio M. C. “Robot as Legal Person: Electronic Personhood in Robotics and Artificial Intelligence.” Frontiers in Robotics and AI, vol. 8, 2021, article 789327, https://doi.org/10.3389/frobt.2021.789327. See also: European Parliament. Artificial Intelligence and Civil Liability. Study No. PE 621.926, 2020, https://www.europarl.europa.eu/RegData/etudes/STUD/2020/621926/IPOL_STU(2020)621926_EN.pdf.

[27] Law Commission Discussion Paper.

[28] European Parliament Report.

[29] European Parliament Report.

[30] Law Commission Discussion Paper.

[31] Law Commission Discussion Paper.
