
Mitigating Risks in Generative AI: A Guide to the NIST AI Risk Management Framework


by Mónika Mercz

The US National Institute of Standards and Technology (NIST) released NIST-AI-600-1, Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile[1] on July 26, 2024. The document is a cross-sectoral profile of the AI Risk Management Framework (AI RMF 1.0) for Generative AI,[2] pursuant to President Biden’s Executive Order (EO) 14110 on Safe, Secure, and Trustworthy Artificial Intelligence.[3]

The Framework provides an overview of twelve risk categories that are unique to or exacerbated by generative artificial intelligence (GAI), including data privacy, environmental impacts, intellectual property, and human-AI configuration. These risks are juxtaposed against characteristics of trustworthy AI, such as accountability and transparency, enhanced privacy, safety, security, and resilience.

The U.S. Department of State, in turn, released a Risk Management Profile for AI and Human Rights, a practical guide for organizations "to design, develop, deploy, use, and govern AI in a manner consistent with respect for international human rights."[4] A companion NIST AI RMF Playbook[5] has also been published, along with an AI RMF Roadmap,[6] an AI RMF Crosswalk,[7] and various Perspectives.[8]

The Framework includes a section of suggested actions to manage GAI risks, stressing the need to understand these systems as thoroughly as possible.[9] To address concerns around data privacy, harmful bias and homogenization, and intellectual property, it suggests aligning GAI development and use with applicable laws and regulations, including data privacy, copyright, and intellectual property law. It also calls for transparency policies and processes that document the origin and history of both training data and generated data for GAI applications, as a means of advancing digital content transparency. Separate policies are encouraged for evaluating the risk-relevant capabilities of GAI systems and the robustness of their safety measures.
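To make the data-provenance suggestion concrete, here is a minimal sketch of what such documentation could look like in practice. The `DatasetProvenance` record and its field names are illustrative assumptions; NIST AI 600-1 prescribes the content of such documentation, not any particular format.

```python
# Illustrative sketch of a training-data provenance record, per the Framework's
# call to document the origin and history of training data. All names are
# hypothetical; this is not a NIST-defined schema.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DatasetProvenance:
    name: str                     # human-readable dataset identifier
    source: str                   # where the data originated (URL, vendor, internal system)
    collected_on: date            # when the data was gathered
    license: str                  # copyright / IP status, per the Framework's IP concerns
    contains_personal_data: bool  # flags data-privacy review obligations
    processing_history: list[str] = field(default_factory=list)  # cleaning, filtering, augmentation steps

record = DatasetProvenance(
    name="support-tickets-2023",
    source="internal CRM export",
    collected_on=date(2023, 11, 2),
    license="proprietary, internal use only",
    contains_personal_data=True,
    processing_history=["PII redaction", "deduplication", "language filtering"],
)
```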

The Framework recommends dedicated processes, such as setting minimum thresholds for performance or assurance criteria, to be reviewed as part of deployment approval policies, procedures, and processes. It also highlights the importance of establishing a test plan and response policy before developing advanced models, which should then undergo periodic evaluation, and emphasizes the necessity of protocols ensuring that GAI systems can be deactivated when required.
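A minimal sketch of these two controls follows, assuming hypothetical metrics and threshold values (the Framework does not prescribe any of the numbers or names used here): a deployment gate that enforces minimum assurance criteria, and a deactivation hook for switching a system off.

```python
# Hypothetical deployment gate and kill switch; metric names and thresholds
# are assumptions for illustration, not values from NIST AI 600-1.
MIN_THRESHOLDS = {
    "factual_accuracy": 0.90,   # minimum share of benchmark answers judged correct
    "toxicity_rate_max": 0.01,  # maximum tolerated rate of harmful outputs
}

def approve_deployment(eval_results: dict[str, float]) -> bool:
    """Return True only if every assurance criterion is met."""
    if eval_results["factual_accuracy"] < MIN_THRESHOLDS["factual_accuracy"]:
        return False
    if eval_results["toxicity_rate"] > MIN_THRESHOLDS["toxicity_rate_max"]:
        return False
    return True

def deactivate(system_id: str) -> None:
    """Stand-in for the Framework's suggested deactivation protocol."""
    print(f"Routing traffic away from {system_id} and revoking API access.")

results = {"factual_accuracy": 0.87, "toxicity_rate": 0.004}
if not approve_deployment(results):
    deactivate("gai-assistant-v2")  # accuracy below threshold, so shut down
```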

Further points of the Framework include identifying intended purposes for GAI systems by considering internal vs. external use, application scope, fine-tuning, and data sources; collaborating with experts to determine acceptable use contexts, taking into account assumptions, direct value, operational environment, potential impacts, and social norms; and documenting risk measurement plans that cover cognitive biases, past incidents, misuse and abuse, overreliance on quantitative metrics, standardized measurement, and anticipated human-AI configurations.
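As one way of picturing how these documentation steps might be captured together, the following sketch bundles them into a single structured record. Every field name here is an assumption made for illustration; the Framework specifies what to document, not how.

```python
# Hypothetical record combining the documentation items above: intended
# purpose, acceptable use contexts, and a risk measurement plan.
from dataclasses import dataclass

@dataclass
class GAISystemProfile:
    system_id: str
    internal_use_only: bool           # internal vs. external use
    application_scope: str            # e.g. "customer-support drafting"
    fine_tuned_on: list[str]          # data sources used for fine-tuning
    acceptable_contexts: list[str]    # use contexts agreed with domain experts
    risk_measurement_plan: list[str]  # known biases, past incidents, misuse scenarios

profile = GAISystemProfile(
    system_id="gai-assistant-v2",
    internal_use_only=True,
    application_scope="drafting replies for human review",
    fine_tuned_on=["support-tickets-2023"],
    acceptable_contexts=["agent-assist with mandatory human sign-off"],
    risk_measurement_plan=["automation bias audit", "overreliance monitoring"],
)
```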

Altogether, the document contains more than 200 suggested actions to mitigate the risks of GAI, organized around the AI RMF's functions for dealing with AI risks: governing, mapping, measuring, and managing. But is NIST right about its unique approach to AI risks?

Categorizing AI tools by their risks is certainly not unique to the Framework: the EU's AI Act likewise approaches AI by assessing the risks these systems may pose to human rights and classifies them into four levels of risk.[10] The Framework, however, does not address all AI-powered systems; it deals only with generative AI and goes into greater detail on specific risks rather than attempting an overarching solution. Both documents must remain abstract by nature, as their central subject is expected to evolve and change substantially over time; the significant difference lies in their approach to legislation and utilization. Trustworthiness, however, is vital to both. This is hardly surprising, given that the single most important problem with today's AI systems is that they are not safe enough to use without further reservations. Human oversight, legal mechanisms for measuring the safety of these exceedingly complex systems, and transparency obligations are all ways in which legislative efforts in several countries aim to achieve trustworthy AI.[11]

The most enlightening and hopeful idea about artificial intelligence is that it does not only cause harm and pose risks to human rights: it could also help solve global problems and mitigate the risks posed by other AI systems. GAI is expected to improve cybersecurity defenses[12] through threat intelligence analysis, by helping in fraud detection, and by processing the data uploaded and created in cloud environments to classify and tag it according to predefined policies.[13]
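As a toy illustration of the policy-based classify-and-tag idea mentioned above, the following sketch scans text against predefined policies and attaches matching tags. Real cloud systems would rely on trained classifiers; the two regular-expression "policies" here are purely illustrative.

```python
# Toy policy-based tagger: each policy is a pattern, and a document receives
# the tags of every policy it matches. Patterns are illustrative only.
import re

POLICIES = {
    "PII": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # SSN-like number pattern
    "FINANCIAL": re.compile(r"\b(?:invoice|IBAN|account)\b", re.IGNORECASE),
}

def tag_document(text: str) -> list[str]:
    """Return the policy tags whose patterns match the document."""
    return [tag for tag, pattern in POLICIES.items() if pattern.search(text)]

print(tag_document("Invoice 2024-17: pay to account 123-45-6789"))
# -> ['PII', 'FINANCIAL']
```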

Ultimately, the race to regulate and utilize AI is well under way. The Framework is an excellent step toward ensuring that AI is transparent, trustworthy, and ultimately suitable for safe use. AI may change our lives for the better and make our work faster and more precise, but its pitfalls should always be kept in mind.


[1] NIST Trustworthy and Responsible AI NIST AI 600-1: Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile, 2024. https://doi.org/10.6028/NIST.AI.600-1, hereinafter: NIST AI Risk Management Framework

[2] NIST AI Risk Management Framework, page 1. See also: Artificial Intelligence Risk Management Framework (AI RMF 1.0), https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf

[3] NIST AI Risk Management Framework, page 1. See also: Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/

[4] https://www.state.gov/risk-management-profile-for-ai-and-human-rights/#fn1

[5] https://airc.nist.gov/AI_RMF_Knowledge_Base/Playbook

[6] https://www.nist.gov/itl/ai-risk-management-framework/roadmap-nist-artificial-intelligence-risk-management-framework-ai

[7] https://www.nist.gov/itl/ai-risk-management-framework/crosswalks-nist-artificial-intelligence-risk-management-framework

[8] https://www.nist.gov/itl/ai-risk-management-framework/perspectives-about-nist-artificial-intelligence-risk-management

[9] NIST AI Risk Management Framework, page 12.

[10] Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act)

[11] https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines/1.html

[12] Microsoft: GDPR & Generative AI – A Guide for the Public Sector, 2024. https://wwps.microsoft.com/wp-content/uploads/2024/04/GDPR-and-Generative-AI-A-Guide-for-the-Public-Sector-FINAL.pdf

[13] Dave Shackleford: AI in risk management: Top benefits and challenges explained, TechTarget, 2023. https://www.techtarget.com/searchsecurity/tip/The-benefits-of-using-AI-in-risk-management
