As artificial intelligence (AI) systems become increasingly embedded in decision-making across both the public and private sectors, the need for robust governance frameworks that safeguard the core values of democratic societies has never been more pressing. In response, the Committee on Artificial Intelligence (CAI) of the Council of Europe adopted the HUDERIA Methodology in late 2024 – a pioneering approach to assessing AI’s impact through a human rights lens.
A human rights-based AI assessment tool
HUDERIA, which stands for Human Rights, Democracy, and Rule of Law Impact Assessment, offers structured, practical guidance to identify and manage the risks and impacts posed by AI systems.[1] Developed by the Ethics and Responsible Innovation team at the Alan Turing Institute, HUDERIA reflects years of close collaboration with the Council of Europe and its Member and Observer States.[2]
While HUDERIA is a non-binding guidance framework, it seeks to operationalize international human rights principles throughout the socio-technical lifecycle of AI systems.[3] It complements other risk assessment tools – such as the NIST AI Risk Management Framework and the fundamental rights impact assessments required under the EU AI Act – by offering a distinctively normative and people-centred perspective grounded in democratic values.
Relationship to the Council of Europe Framework Convention on AI and human rights
HUDERIA was developed alongside the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (“the Framework Convention”).[4] Both initiatives share a commitment to embedding human rights and rule of law principles into AI governance.[5] However, HUDERIA is a standalone, voluntary guidance tool – not an official interpretive aid for the Framework Convention.[6]
Instead, Parties to the Convention may choose to adopt HUDERIA – either fully or partially – as a flexible model to implement their risk and impact management obligations under the Convention, especially those outlined in Chapter V.[7] This allows States to adapt HUDERIA in line with their national laws, provided they meet the Convention’s baseline requirements for risk assessment.
Adopted by the Council of Europe’s Committee of Ministers on 17 May 2024 and opened for signature on 5 September 2024, the Framework Convention already counts among its early signatories the EU, the United States, the United Kingdom, Israel, Norway, and several Council of Europe Member States.[8] It remains open for signature by Council of Europe Member States, non-member States that participated in its drafting, and the EU; other non-member States may later be invited to accede, subject to the unanimous approval of the existing Parties.
The four core elements of the HUDERIA Methodology
HUDERIA’s guidance is structured around four key methodological components:[9]
- I. COBRA – Context-Based Risk Analysis
Maps the AI system’s broader socio-technical environment to identify initial risks.
- II. SEP – Stakeholder Engagement Process
Encourages inclusive dialogue with those affected by the AI system, uncovering hidden harms and potential mitigation strategies.
- III. RIA – Risk and Impact Assessment
Provides a detailed process for evaluating the likelihood and severity of risks to human rights, democracy, and the rule of law.
- IV. MP – Mitigation Plan
Recommends responsive actions, including access to remedies and ongoing review mechanisms.
A defining feature of HUDERIA is its adaptability. Rather than a rigid checklist, it is a flexible framework that can be tailored to different sectors, organizational capacities, and risk levels: systems posing minimal risk may undergo only a lighter assessment, while those posing higher risks warrant more comprehensive governance measures, as the sketch below illustrates.
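To make this structure concrete, here is a minimal Python sketch of how an organization might record the four elements in a single assessment object and triage risks so that the depth of review scales with a combined likelihood-severity score. Everything in it is illustrative: the class names, the numeric thresholds, and the likelihood-times-severity scoring are assumptions made for the example, not anything the Methodology prescribes.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    # Hypothetical tiers; HUDERIA does not prescribe these labels.
    MINIMAL = 1
    MODERATE = 2
    HIGH = 3


@dataclass
class Risk:
    description: str   # e.g. "indirect discrimination in credit scoring"
    likelihood: float  # estimated probability of occurrence, 0..1
    severity: float    # gravity of impact on rights, 0..1


@dataclass
class Assessment:
    """One record per AI system, with a field for each HUDERIA element."""
    system_name: str
    context: dict = field(default_factory=dict)               # I. COBRA findings
    stakeholder_findings: list = field(default_factory=list)  # II. SEP input
    risks: list = field(default_factory=list)                 # III. RIA entries
    mitigations: list = field(default_factory=list)           # IV. MP actions


def triage(risks: list) -> RiskTier:
    """Map the worst combined likelihood-times-severity score to a tier,
    so governance effort scales with risk (thresholds are assumptions)."""
    worst = max((r.likelihood * r.severity for r in risks), default=0.0)
    if worst < 0.2:
        return RiskTier.MINIMAL
    if worst < 0.6:
        return RiskTier.MODERATE
    return RiskTier.HIGH


assessment = Assessment("loan-scoring-model")
assessment.risks.append(Risk("biased outcomes for protected groups", 0.7, 0.9))
print(triage(assessment.risks))  # RiskTier.HIGH -> full SEP, RIA and MP warranted
```

In practice, each stage calls for qualitative judgment and stakeholder dialogue; a scoring function like `triage` could at most support that deliberation, never replace it.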
Broader context: emerging normative frameworks for AI governance
As the Council of Europe’s Framework Convention gains traction, attention is also turning to other normative frameworks that could shape global AI governance. A prominent example is the concept of an International AI Bill of Human Rights, which aims to articulate specific rights and safeguards in the AI era.
This was a key topic at a recent academic consultation[10] hosted by the Bonavero Institute of Human Rights and the Institute for Ethics in AI in March 2025. Experts debated whether new AI-specific rights are necessary or whether existing human rights frameworks suffice when properly adapted. Proposed rights discussed included the right to access AI; the right to human decision-making; the right to control personal data; and prohibitions on impersonation, manipulation, and bias.[11]
Implementation and refinement ahead
With the methodology now formally adopted, the focus shifts to implementation and refinement. In 2025, a new, more detailed HUDERIA Model will be developed and piloted,[12] complementing the current Methodology.[13] This next step aims to translate the high-level guidance into concrete, adaptable tools and practices, enabling policymakers, developers, and regulators to more effectively assess and mitigate the risks AI systems pose to human rights, democracy, and the rule of law.
As HUDERIA evolves, it holds strong potential to become a cornerstone framework for human-centric and accountable AI governance across Europe and beyond.
[1] Council of Europe, Committee on Artificial Intelligence (CAI). Methodology for the Risk and Impact Assessment of Artificial Intelligence Systems from the Point of View of Human Rights, Democracy and the Rule of Law (HUDERIA Methodology). CAI(2024)16rev2, Strasbourg, 28 Nov. 2024, www.coe.int/cai. Hereinafter: HUDERIA Methodology.
[2] Spindlow, Sam. “Council of Europe Adopts Turing-Developed Human Rights Risk and Impact Assessment for AI Systems.” The Alan Turing Institute, 5 Dec. 2024, www.turing.ac.uk/news/council-europe-adopts-turing-developed-human-rights-risk-and-impact-assessment-ai-systems.
[3] HUDERIA Methodology.
[4] Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. Council of Europe Treaty Series, no. 225, Council of Europe, 5 Sept. 2024, https://rm.coe.int/1680afae3c.
[5] Babická, Karolína, and Cristina Giacomin. “Understanding the Scope of the Council of Europe Framework Convention on AI.” Opinio Juris, 5 Nov. 2024, http://opiniojuris.org/2024/11/05/understanding-the-scope-of-the-council-of-europe-framework-convention-on-ai/.
[6] HUDERIA Methodology.
[7] Ibid.
[8] Council of Europe. Explanatory Report to the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. Council of Europe Treaty Series, no. 225, Vilnius, 5 Sept. 2024, https://rm.coe.int/1680afae67.
[9] HUDERIA Methodology.
[10] “The 2024 Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law: Promises and Shortcomings, University of Oxford, 5 March 2025.” EU Law Live, 6 Feb. 2025, https://eulawlive.com/event/the-2024-council-of-europe-framework-convention-on-artificial-intelligence-and-human-rights-democracy-and-the-rule-of-law-promises-and-shortcomings-university-of-oxford-5-march-2025/.
[11] Ksiazek, Konrad. “The Need for, and Feasibility of, an International AI Bill of Human Rights.” Oxford Institute for Ethics in AI, University of Oxford, https://www.oxford-aiethics.ox.ac.uk/need-and-feasibility-international-ai-bill-human-rights. Accessed 16 May 2025.
[12] Spindlow, 2024.
[13] “HUDERIA: New Tool to Assess the Impact of AI Systems on Human Rights.” Council of Europe, 2 Dec. 2024, https://www.coe.int/en/web/portal/-/huderia-new-tool-to-assess-the-impact-of-ai-systems-on-human-rights.