NIST AI RMF
Also called: NIST AI Risk Management Framework, AI Risk Management Framework 1.0
The NIST AI Risk Management Framework (AI RMF 1.0) is a voluntary US framework for managing AI system risk across the lifecycle. It is organized around four core functions (Govern, Map, Measure, Manage) and was extended in 2024 by a dedicated Generative AI Profile.
Released by the US National Institute of Standards and Technology in January 2023, with a Generative AI Profile added in July 2024, the AI RMF is voluntary but increasingly cited as the de facto US baseline by federal agencies, contracting officers, and state legislators. Its four core functions:
- Govern — culture, accountability, policies, and oversight infrastructure
- Map — context characterization, intended use, AI system categorization, impact assessment
- Measure — quantitative and qualitative analysis of identified risks (validity, reliability, safety, security, fairness, accountability, transparency)
- Manage — risk prioritization, treatment, monitoring, and response
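In practice, organizations often operationalize the four functions as a risk register that tags each mitigation activity to the function it serves. The sketch below is purely illustrative, assuming a hypothetical `RiskEntry` record and activity names; none of it comes from NIST itself.

```python
# Illustrative sketch only: a minimal risk-register entry tagged by the
# AI RMF's four core functions. All class, field, and activity names are
# hypothetical examples, not part of the NIST framework.
from dataclasses import dataclass

FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

@dataclass
class RiskEntry:
    risk: str        # e.g. "harmful bias in model outputs"
    function: str    # which AI RMF core function the activity falls under
    activity: str    # the concrete control or task
    owner: str = "unassigned"

    def __post_init__(self):
        if self.function not in FUNCTIONS:
            raise ValueError(f"unknown AI RMF function: {self.function}")

# One risk typically spawns activities under several functions.
register = [
    RiskEntry("harmful bias", "Map", "document intended use and affected groups"),
    RiskEntry("harmful bias", "Measure", "run disaggregated accuracy tests"),
    RiskEntry("harmful bias", "Manage", "define retraining trigger thresholds"),
]

# Group activities by function for a simple coverage view.
by_function: dict[str, list[str]] = {}
for entry in register:
    by_function.setdefault(entry.function, []).append(entry.activity)
```

The point of the tagging is coverage: a risk with Measure activities but nothing under Manage signals an assessment with no treatment plan.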
Unlike the EU AI Act, the NIST framework is not legally binding. However, Executive Order 14110 (still partially in effect) directs federal agencies to apply it to their AI procurement and use, which creates a downstream obligation for any vendor selling AI to the federal government.
The Generative AI Profile addresses specific risks of foundation models: confabulation, data privacy, environmental impact, harmful bias, human-AI configuration, information integrity, intellectual property, obscene/degrading/abusive content, and value chain/component integration.
Why it matters
If you sell to US enterprises or government, NIST AI RMF alignment is becoming table stakes; even private-sector buyers are using it as a procurement filter. The framework is also a solid practical guide for organizations starting from zero. It doesn't tell you what to do so much as what to think about, which is more useful when your AI portfolio is still forming.
Related terms
EU AI Act
The EU AI Act (Regulation 2024/1689) is the European Union's binding legal framework for artificial intelligence systems.
ISO/IEC 42001
ISO/IEC 42001:2023 is the first international certifiable management-system standard for artificial intelligence.
AI Governance
AI governance is the set of policies, processes, roles, and controls an organization uses to develop, deploy, and operate AI systems responsibly and in compliance with applicable laws, standards, and stakeholder expectations.
