Model Risk
Also called: AI model risk, model risk management, MRM
Model risk is the potential for adverse outcomes — financial loss, regulatory action, reputational damage, customer harm — arising from errors in the development, implementation, or use of an AI/ML model. The discipline that manages this risk is called Model Risk Management (MRM).
MRM has deep roots in financial services, where regulators (Federal Reserve SR 11-7, OCC 2011-12, ECB TRIM) have required formal MRM programs for credit, market, and operational models since 2011. The arrival of generative AI has expanded MRM out of finance into every industry that deploys AI for decision support.
A typical model risk inventory documents:
- Model purpose — what decision it informs, how it's used downstream
- Materiality / risk tier — high (autonomous decisions on regulated outcomes), medium (decision support for humans), low (productivity tooling)
- Validation evidence — performance on training/holdout/production data, drift over time, fairness metrics, robustness tests
- Approved use cases and explicit out-of-scope scenarios
- Monitoring controls — what's being tracked, by whom, with what response triggers
- Owner — accountable individual, not a team or "the AI"
- Last review date and next scheduled review
Generative AI introduces new failure modes that traditional MRM frameworks weren't designed for: hallucination, prompt sensitivity, context-window degradation, training-data drift between API versions. Regulators are still catching up; expect significant 2026-2028 movement on GenAI-specific MRM guidance.
Why it matters
If your AI makes or shapes decisions that affect customers, employees, finances, or compliance posture, you need a model risk program — even if you don't call it that. Without it you can't answer "who approved this," "what could go wrong," or "how would we know if it broke." Those questions get harder, not easier, as you add models.
Related terms
AI Governance
AI governance is the set of policies, processes, roles, and controls an organization uses to develop, deploy, and operate AI systems responsibly and in compliance with applicable laws, standards, and stakeholder expectations.
NIST AI RMF
The NIST AI Risk Management Framework (AI RMF 1.
Prompt Injection
Prompt injection is an attack where adversarial text — placed in user input, retrieved documents, tool outputs, or other model context — overrides the model's intended instructions and causes it to perform actions or disclose information the developer did not authorize.
