Generative AI (GenAI) tools hold significant potential to transform the legal profession. They can enhance efficiency, for example by streamlining research and speeding up the generation of legal documents. While GenAI could revolutionize how lawyers and judges handle complex legal tasks, the stakes in the legal field warrant caution. Because legal decisions directly affect individuals’ rights, business outcomes, and societal well-being, the integration of GenAI into legal processes must be accompanied by ethical considerations and safeguards, particularly in courtrooms, where fundamental rights are at stake.[1]
Different jurisdictions are adopting varying approaches to the use of generative AI in judicial decision-making. Below, we explore some of these strategies.
The European Union
The European Union’s Artificial Intelligence Act (AI Act)[2] is a key legal framework governing the use of artificial intelligence across member states. Designed to ensure safe and ethical AI deployment, the Act classifies AI systems based on their risk levels. In the context of legal decision-making, one notable provision is the classification of certain AI tools used in the judicial process as “high-risk.”[3] Specifically, AI systems intended to assist judicial authorities in tasks such as researching facts, interpreting laws, or applying them to specific cases, as well as those used in alternative dispute resolution, are considered high-risk.[4] These systems must adhere to the strict requirements set out in the AI Act to ensure their safe and responsible use.
The United States
The US approach to AI regulation is particularly relevant to legal decision-making because it directly addresses the use of AI in contexts that shape legal processes, such as courts, criminal justice, and administrative decisions.[5] In 2022, the US introduced the Blueprint for an AI Bill of Rights,[6] a set of guidelines laying out core principles for the ethical deployment of AI, particularly in legal contexts. The document emphasizes that AI systems must be designed to prevent algorithmic discrimination, ensuring fairness in decision-making processes, and it stresses the need for human alternatives and fallback options to preserve accountability and trust in AI-driven decisions.[7] Building on these principles, in 2023 the US government issued Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.[8] The order reinforces the importance of preventing algorithmic discrimination and of ensuring transparency and accountability in AI systems, particularly in the context of legal decision-making. It also prioritizes data privacy and the safeguarding of personal information in legal environments, and it emphasizes the need for human oversight of AI-driven processes to maintain fairness and equity, especially within the criminal justice system.
China
China is at the forefront of integrating AI into legal decision-making. In 2017, it launched the country’s first “smart court” in Hangzhou, marking the beginning of its AI-driven judicial innovations.[9] Since then, AI has been increasingly embedded in the legal system, with guide robots such as “Yun Fan” in Xinyang and “Xiao Chong” in Chongzhou assisting with tasks such as answering legal questions, providing pre-trial predictions, and offering legal consultations.[10] China has also introduced AI judges that, under human supervision, have successfully adjudicated small claims.[11]
Canada
Canada has adopted a more cautious stance toward using AI in the legal system. The Federal Court of Canada has made clear that it will not use AI in judgments or orders without first conducting public consultations.[12]
The United Kingdom
In the UK, the “UK Guidance for Judicial Office Holders on Using AI” directs judicial officers on responsible AI usage. The document highlights the importance of safeguarding confidentiality and therefore advises against entering sensitive data into AI chatbots. As in New Zealand, the guidance urges judicial office holders to verify AI-assisted material for accuracy before use. It also notes that judges are not required to disclose the research behind their judgments and that generative AI can be a useful secondary tool when used properly.[13]
Singapore
By contrast, jurisdictions like Singapore have taken a more proactive approach, moving forward with the development and implementation of GenAI models to automate judicial decision-making in specific cases.[14]
New Zealand
New Zealand has developed the “Guidelines for the Use of Generative AI in Courts and Tribunals,” a comprehensive framework for judges and others working in the justice system.[15] The guidelines emphasize that the capabilities and limitations of GenAI chatbots must be understood, and they caution users about AI’s potential inaccuracies, biases, and limited knowledge of local law. They also warn users not to enter sensitive information into GenAI chatbots, to verify the accuracy of generated content, and to keep ethical considerations in mind. Nevertheless, under the guidelines, judicial officers in New Zealand are not required to disclose their use of AI.[16] Separate guidelines also exist for lawyers, urging caution when using GenAI chatbots because of their inherent risks,[17] and for non-lawyers,[18] providing guidance in relation to New Zealand courts and tribunals.
Estonia
Estonia is actively exploring the integration of AI into its justice system through a coordinated, government-led approach. The country is investing in GenAI research and development, ensuring rigorous data quality standards, and customizing the technology to meet the specific needs of its legal framework.[19] The Estonian Ministry of Justice has expressed significant interest in AI-driven initiatives and in pinpointing areas where AI could offer valuable improvements, although GenAI tools have not yet been implemented in courts.[20]
Different Use Cases of AI in Legal Decision-Making
AI is being explored for various roles within legal systems. In some jurisdictions, AI is used for routine tasks like note-taking during hearings or drafting legal documents such as contracts and court orders.[21] Some systems are also testing AI for case outcome predictions or for assisting in small claims adjudication under human supervision.[22] While these tools can enhance efficiency, especially in low-stakes matters, their use in high-stakes decisions raises concerns about accuracy, fairness, and accountability.[23] These examples underscore the need for careful regulation and ethical oversight as AI continues to evolve in the legal field.
The Path Forward: Balancing Innovation and Justice
The integration of generative AI tools into legal decision-making holds immense promise, but the legal field must approach this transformation carefully. While various countries are experimenting with different use cases and regulatory frameworks, one overarching concern remains: ensuring that AI does not undermine justice and fairness. By prioritizing transparency and accountability, legal systems can utilize AI tools to enhance decision-making without compromising ethical standards. The successful integration of these tools into legal systems will ultimately depend on maintaining a balance between innovation and the core values of justice.
[1] David Uriel Socol de la Osa and Nydia Remolina, Artificial Intelligence at the Bench: Legal and Ethical Challenges of Informing—or Misinforming—Judicial Decision-Making through Generative AI, Data & Policy, Volume 6, e59, 2024. https://doi.org/10.1017/dap.2024.53, hereinafter: Socol de la Osa and Remolina, 2024.
[2] Regulation (EU) 2024/1689.
[3] André Guskow Cardoso, Elizabeth Chan, Luísa Quintão, Cesar Pereira, Generative Artificial Intelligence and Legal Decisionmaking, Global Trade and Customs Journal, Volume 19, Issue 11, pp. 710-730, 2024. https://kluwerlawonline.com/journalarticle/Global+Trade+and+Customs+Journal/19.11/GTCJ2024081, hereinafter: Guskow Cardoso et al., 2024.
[4] William Fry, The Time to (AI) Act is Now: A Practical Guide to High-Risk AI Systems Under The AI Act, Lexology, 22.07.2024. https://www.lexology.com/library/detail.aspx?g=1c8e93dd-c2a1-490d-87ee-ef808682ea0b.
[5] Guskow Cardoso et al., 2024.
[6] The White House, Office of Science and Technology Policy, Blueprint for an AI Bill of Rights, 2022. https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf.
[7] Ibid.
[8] The White House, Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, 30.10.2023. https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.
[9] Guskow Cardoso et al., 2024.
[10] Ibid.
[11] Ibid.
[12] Socol de la Osa and Remolina, 2024.
[13] Guskow Cardoso et al., 2024.
[14] Socol de la Osa and Remolina, 2024.
[15] New Zealand Courts, Guidelines for the Use of Generative Artificial Intelligence in Courts and Tribunals – Judges, Judicial Officers, Tribunal Members and Judicial Support Staff, 7.12.2023. https://www.courtsofnz.govt.nz/assets/6-Going-to-Court/practice-directions/practice-guidelines/all-benches/20231207-GenAI-Guidelines-Judicial.pdf.
[16] Guskow Cardoso et al., 2024.
[17] New Zealand Courts, Guidelines for the Use of Generative Artificial Intelligence in Courts and Tribunals – Lawyers, 7.12.2023. https://www.courtsofnz.govt.nz/assets/6-Going-to-Court/practice-directions/practice-guidelines/all-benches/20231207-GenAI-Guidelines-Lawyers.pdf.
[18] New Zealand Courts, Guidelines for the Use of Generative Artificial Intelligence in Courts and Tribunals – Non-lawyers, 7.12.2023. https://www.courtsofnz.govt.nz/assets/6-Going-to-Court/practice-directions/practice-guidelines/all-benches/20231207-GenAI-Guidelines-Non-Lawyers.pdf.
[19] Socol de la Osa and Remolina, 2024.
[20] Ibid.
[21] AI Rapid Response Team at the National Center for State Courts (NCSC), Artificial Intelligence Guidance for Use of AI and Generative AI in Courts, 7.8.2024. https://www.ncsc.org/__data/assets/pdf_file/0014/102830/ncsc-artificial-intelligence-guidelines-for-courts.pdf, hereinafter: NCSC 2024.
[22] Ibid.
[23] NCSC 2024.