
Navigating the NIST AI Risk Management Framework

Jason J. Boderebe
9 min read
#AI #Risk Management #NIST #Cybersecurity

Introduction: Why AI Risk Management Matters

Artificial intelligence is transforming industries and daily life, from smart assistants to autonomous vehicles. But alongside AI’s benefits come significant risks – biased decision-making, threats to privacy, unsafe behaviors, and other unintended harms [1]. Recent advances (for example, the emergence of powerful yet flawed generative AI models like ChatGPT) have highlighted how crucial it is to manage AI’s risks [2]. Without proper oversight, AI systems could erode civil liberties or even cause real-world damage. As a security practitioner, I’ve seen first-hand that AI risk management isn’t just a checkbox or compliance exercise – it’s essential for ensuring these technologies remain trustworthy and aligned with our values.

Governments and organizations worldwide are now racing to cultivate trust in AI through guidelines, standards, and frameworks for responsible AI development. One major step in this direction is the NIST AI Risk Management Framework (AI RMF). Published by the U.S. National Institute of Standards and Technology in January 2023, the NIST AI RMF v1.0 provides a structured, voluntary approach for organizations to address AI risks and promote trustworthy AI [3]. In this two-part blog series, I’ll break down what the NIST AI RMF is, its core components, and how you can implement it in practice. In Part 1, we’ll cover why the framework is useful, who it’s for, and dive into the first two of its four core functions (Govern and Map). Part 2 will cover the remaining functions (Measure and Manage), practical steps to adopt the framework, its limitations, and a handy AI RMF checklist.

What is the NIST AI RMF (v1.0)?

The NIST AI RMF is the U.S. federal government’s first comprehensive guidance for identifying and managing the risks of artificial intelligence systems [3]. Importantly, it’s not a regulation but a set of guidelines – there are no legal penalties or compliance mandates attached. Instead, the framework is intended for voluntary use by organizations of all types to improve the trustworthiness of their AI systems [3]. Whether you’re a tech giant, a startup, a government agency, a research lab, or any team building or using AI, the AI RMF offers a common approach to manage AI risks. In NIST’s terms, it’s for all “AI actors” across the AI lifecycle – developers, deployers, users, evaluators, etc., in any sector or domain [4] [5]. The framework is industry- and technology-neutral, meaning you can apply it to anything from a simple machine learning model to a complex autonomous system.

At a high level, the AI RMF’s goal is to help organizations “incorporate trustworthiness considerations into the design, development, use, and evaluation” of AI products, services, and systems [6] [1]. In other words, it provides a roadmap to build AI systems that people can trust by proactively addressing issues like safety, fairness, privacy, security, and transparency. The AI RMF was developed through a consensus-driven process with input from industry, academia, civil society, and government experts, so it reflects a broad range of best practices and concerns. Notably, it aligns with global AI governance efforts – for example, it complements the OECD AI Principles and is conceptually in tune with the EU’s proposed AI Act – helping to establish a baseline for managing AI risk worldwide [7].

How is it structured? The NIST AI RMF is modeled loosely after NIST’s popular Cybersecurity Framework. It is organized into four core functions that together cover the AI risk management process: Govern, Map, Measure, and Manage [8]. Each function is further broken down into categories and subcategories (specific outcomes or objectives to achieve). The framework is meant to be flexible – organizations can tailor it to their needs, focusing on the functions or categories most relevant to their context and risk profile [8] [9]. There are also AI RMF “Profiles,” which let an organization map the framework to its particular use cases and maturity level (similar to how profiles are used in other NIST frameworks). Profiles are optional and can be developed over time to capture your current state, your target state, and your progress between the two.
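To make that structure a little more concrete, here is a minimal sketch of how a team might record its own AI RMF profile as structured data. This is purely illustrative – the field names and category text are my own simplification, not official NIST content:

```python
# Illustrative sketch only: a lightweight way to track an AI RMF profile in code.
# Category IDs mirror the framework's naming style (e.g. "Map 1.1"); descriptions are paraphrased.
from dataclasses import dataclass, field

@dataclass
class CategoryAssessment:
    category_id: str      # e.g. "Map 1.1"
    outcome: str          # the outcome/objective the category calls for
    current_state: str    # where the organization is today
    target_state: str     # where it wants to be

@dataclass
class AIRMFProfile:
    system_name: str
    # function name ("Govern", "Map", "Measure", "Manage") -> list of assessments
    functions: dict[str, list[CategoryAssessment]] = field(default_factory=dict)

profile = AIRMFProfile(system_name="resume-screening-model")
profile.functions["Map"] = [
    CategoryAssessment(
        category_id="Map 1.1",
        outcome="Context of use and potential impacts are documented",
        current_state="Ad hoc notes in design docs",
        target_state="Formal impact assessment reviewed every release",
    ),
]

for function, assessments in profile.functions.items():
    for a in assessments:
        print(f"[{function}] {a.category_id}: {a.current_state} -> {a.target_state}")
```

Even a simple record like this makes it easy to see where current practice diverges from the target profile.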

To support practical adoption, NIST released a companion AI RMF Playbook alongside the framework [8]. The Playbook is an interactive resource with suggested actions, techniques, and references to help achieve each outcome in the framework [10]. In short, the AI RMF doesn’t just drop high-level principles on you – it connects them to concrete practices. NIST also launched a Trustworthy AI Resource Center with tools and shareable resources to aid implementation. We’ll touch on some of these in Part 2 when we get into operationalizing the framework.

Before diving into the specific functions, one more thing to note: NIST emphasizes that AI risk management should be continuous and iterative. You don’t just do a one-time risk assessment and call it a day – you cycle through these core functions regularly as your AI system is developed, deployed, and monitored [11]. And while the four functions are listed in an order (Govern, Map, Measure, Manage), they are highly interrelated rather than strictly sequential. In fact, “Govern” is a cross-cutting function that informs all the others [12]. You can think of Govern as the hub in the middle of the wheel, with Map, Measure, and Manage as the surrounding spokes – as illustrated in NIST’s diagram below.

[Figure: NIST AI RMF core functions – Govern at the center, with Map, Measure, and Manage around it]

With that overview in mind, let’s explore the first two core functions of the AI RMF in detail: Govern and Map.

Govern: Establish a Risk Management Foundation

The Govern function is all about establishing the foundational governance practices and culture within your organization to manage AI risks. Essentially, this is where leadership sets the tone and the internal processes for responsible AI are put in place. It involves creating the policies, roles, and oversight mechanisms needed to ensure AI development and deployment are done responsibly [13]. Under Govern, organizations should cultivate a culture of risk management – meaning top management supports it, there are clear accountabilities, and everyone from developers to executives stays aware of AI risks and their role in managing them. This often includes activities like training staff on AI ethics and security, ensuring teams working on AI are multidisciplinary (not just engineers, but also domain experts, ethicists, and others), and aligning AI risk management with the organization’s broader mission, values, and legal obligations.

In short, Govern sets the stage for everything else. If you don’t have strong governance and an organizational commitment to AI risk management, then all the technical fixes in the world won’t guarantee trustworthy AI. (Even the framework explicitly notes that AI systems are “inherently socio-technical” – effective risk mitigation requires human and organizational controls as much as technical ones [14].) For example, you might have the best bias-detection tool (a technical measure), but if your leadership doesn’t prioritize fixing bias issues or your team isn’t trained to interpret and act on those findings, the tool won’t help. Thus, the Govern function covers things like establishing an AI risk oversight committee, defining an AI risk policy, setting ethical AI principles, and making sure there’s support and awareness at all levels of the organization. It’s the organizational glue that holds the whole AI risk program together.
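As a rough illustration of how those Govern activities can be made visible and auditable, here is a hypothetical sketch of a governance checklist kept as structured data. The controls and owners are invented examples based on the activities described above, not NIST requirements:

```python
# Hypothetical sketch: tracking Govern-function activities as a simple, auditable checklist.
govern_controls = [
    {"control": "AI risk oversight committee established",  "owner": "CISO office",  "done": True},
    {"control": "AI risk policy defined and published",      "owner": "Risk team",    "done": True},
    {"control": "Ethical AI principles adopted",             "owner": "Leadership",   "done": False},
    {"control": "AI ethics and security training delivered", "owner": "Security eng", "done": False},
]

for c in govern_controls:
    status = "done" if c["done"] else "open"
    print(f"[{status:>4}] {c['control']} (owner: {c['owner']})")
```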

Map: Identify AI Risks in Context

The Map function focuses on contextualizing and identifying the risks of a given AI system. Here, you and your team “map” out the landscape in which the AI will operate and pinpoint what could go wrong. This starts with clearly defining the AI system in question – its purpose, scope, and objectives – and then identifying who and what might be impacted by it, both positively and negatively [15]. Crucially, mapping isn’t limited to direct users; it also covers customers, potentially affected communities, and society at large. You want to ask questions like: What is this AI system intended to do? What data does it use? What decisions or outputs does it generate? And then: What could go wrong or produce harm if it fails or is misused? Could it produce biased or unfair outcomes? Could it invade someone’s privacy or threaten safety? Who would be affected in those scenarios, and how severely? By thoroughly assessing the context and potential impact, you can surface a comprehensive list of risks.

Under Map, teams are encouraged to think broadly and holistically. It’s not just about technical failure modes; it’s also about societal, ethical, and legal considerations [16]. For instance, an AI used in hiring might technically perform well in selecting candidates, but mapping should also consider if it could inadvertently discriminate against certain groups (ethical risk) or violate employment laws (legal risk). In fact, one of the key subcategories in the NIST framework (Map 1.1) explicitly calls for analyzing the context of use and documenting the potential positive and negative impacts on individuals, communities, organizations, society, and the planet [16] – a reminder that AI’s effects can scale widely. Map also involves identifying any assumptions and limitations of the AI system (e.g. “this face recognition model may not work well on certain demographic groups”) so that those are known upfront.

The outcome of the Map function is essentially a risk register or at least a prioritized list of what risks are on the radar for this AI system. In short, Map = Identify and understand the risks. This mapping sets the priorities for what to measure and how to manage the risks. It helps answer, “Of all the things that could go wrong, which are the most important or most likely, and where should we focus our attention?” Sometimes, Map may even lead you to conclude that a planned AI system is too risky to proceed with – or highlight areas where you need to gather more information. But generally, after mapping, you’ll move on to the next functions with a clear understanding of the risk landscape.
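To make that output tangible, here is a minimal, hypothetical risk-register sketch for the hiring example above. The fields and the simple likelihood-times-severity scoring are my own simplification – the AI RMF does not prescribe a particular format:

```python
# Hypothetical risk register produced by the Map function (fields and scoring are illustrative).
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    affected: str        # who or what could be harmed
    likelihood: int      # 1 (rare) .. 5 (almost certain)
    severity: int        # 1 (negligible) .. 5 (critical)

    @property
    def priority(self) -> int:
        return self.likelihood * self.severity

risks = [
    Risk("Model discriminates against a protected group",  "Job applicants", 3, 5),
    Risk("Training data exposes personal information",      "Data subjects",  2, 4),
    Risk("Model quality degrades silently as inputs drift", "Hiring teams",   4, 3),
]

# Highest-priority risks first; these feed directly into Measure and Manage.
for r in sorted(risks, key=lambda r: r.priority, reverse=True):
    print(f"P{r.priority:02d} | {r.description} (affects: {r.affected})")
```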

(To be continued…) In the next installment (Part 2), we’ll dive into the remaining two core functions – Measure and Manage – and explore how organizations can operationalize the AI RMF (including tools, techniques, and real-world tips). We’ll also discuss the framework’s limitations and provide a handy checklist to summarize how you can start applying the NIST AI RMF in your own AI projects. Stay tuned!