EU AI Act
Also called: European AI Act, AI Act regulation 2024/1689
The EU AI Act (Regulation 2024/1689) is the European Union's binding legal framework for artificial intelligence systems. It classifies AI systems by risk tier — unacceptable, high, limited, minimal — and imposes obligations on providers, deployers, importers, and distributors based on that classification.
Adopted in 2024 with a phased compliance timeline running through 2027, the EU AI Act applies extraterritorially: obligations attach to any AI system whose outputs are used in the EU, regardless of where the provider is headquartered. The Act defines four risk tiers:
- Unacceptable risk — banned outright (social scoring, manipulative subliminal techniques, real-time biometric identification in public spaces with narrow exceptions)
- High risk — strict obligations including risk management systems, data governance, human oversight, post-market monitoring, conformity assessment, and CE marking. Covers AI in critical infrastructure, education, employment, essential services, law enforcement, migration, and democratic processes
- Limited risk — transparency obligations (chatbot disclosure, deepfake labeling, synthetic content marking)
- Minimal risk — voluntary codes of conduct
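The tiered logic above can be sketched as a small classification helper. This is illustrative only: the tier names and example use cases come from the Act, but the keyword lists and the `classify` function are hypothetical simplifications — real classification requires legal analysis of the Act's annexes, not string matching.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strict obligations + conformity assessment
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # voluntary codes of conduct

# Hypothetical keyword buckets drawn from the Act's example use cases.
PROHIBITED = {"social scoring", "subliminal manipulation"}
HIGH_RISK = {"hiring", "credit scoring", "critical infrastructure",
             "education", "law enforcement", "migration"}
TRANSPARENCY = {"chatbot", "deepfake", "synthetic content"}

def classify(use_case: str) -> RiskTier:
    """Map a use-case description to a risk tier (toy heuristic)."""
    uc = use_case.lower()
    if any(k in uc for k in PROHIBITED):
        return RiskTier.UNACCEPTABLE
    if any(k in uc for k in HIGH_RISK):
        return RiskTier.HIGH
    if any(k in uc for k in TRANSPARENCY):
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

Note that the tiers are checked in order of severity, mirroring how the Act's prohibitions take precedence over high-risk obligations.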
General-purpose AI (GPAI) models have a separate regime. Those presumed to pose "systemic risk" (cumulative training compute above 10^25 FLOPs) face additional obligations, including model evaluations, incident reporting, and adversarial testing.
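The compute threshold reduces to a one-line check. Only the 10^25 FLOPs figure comes from the Act; the function name and the idea of feeding it a single cumulative-compute number are sketch-level assumptions (regulators may also designate models as systemic-risk on other grounds).

```python
SYSTEMIC_RISK_FLOPS = 1e25  # threshold stated in the Act

def presumed_systemic_risk(cumulative_training_flops: float) -> bool:
    """A GPAI model whose cumulative training compute exceeds the
    threshold is presumed to carry systemic risk."""
    return cumulative_training_flops > SYSTEMIC_RISK_FLOPS
```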
Penalties scale with violation severity, up to €35M or 7% of global annual turnover (whichever is greater) for prohibited-use violations.
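The "whichever is greater" structure of the top penalty band is easy to get wrong in budgeting spreadsheets, so here is the arithmetic spelled out. The €35M and 7% figures are from the Act; the function itself is just a worked example of the maximum exposure for prohibited-use violations, not legal advice on how a fine would actually be set.

```python
def max_fine_prohibited_use(global_annual_turnover_eur: float) -> float:
    """Upper bound for prohibited-use violations: EUR 35M or 7% of
    global annual turnover, whichever is greater."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)
```

For a company with €1B global turnover, 7% (€70M) exceeds the €35M floor, so the turnover-based figure governs; for a €100M company, the €35M floor dominates.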
Why it matters
Practitioners need to do four things at minimum: (1) inventory every AI system in use, (2) classify each one against the risk tiers, (3) implement the obligations matching that classification, and (4) maintain documentation that survives audit. Most organizations are not yet compliant; the high-risk obligations alone can take 6-12 months to operationalize. Early movers benefit from cleaner audits and a head start on the GPAI obligations that activate later.
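The four-step checklist above can be sketched as a minimal inventory record that tracks which obligations remain open per system. The obligation labels summarize the Act's high-risk and transparency requirements; the `AISystemRecord` structure, its fields, and the example system are hypothetical scaffolding, not a prescribed compliance schema.

```python
from dataclasses import dataclass, field

# Short labels summarizing per-tier obligations from the Act.
OBLIGATIONS = {
    "high": ["risk management system", "data governance", "human oversight",
             "post-market monitoring", "conformity assessment", "CE marking"],
    "limited": ["user disclosure", "synthetic-content marking"],
    "minimal": ["voluntary code of conduct"],
}

@dataclass
class AISystemRecord:
    name: str                      # step 1: inventoried system
    tier: str                      # step 2: classification result
    done: set = field(default_factory=set)  # steps 3-4: evidenced obligations

    def open_obligations(self) -> list:
        """Obligations still outstanding for this system."""
        return [o for o in OBLIGATIONS.get(self.tier, []) if o not in self.done]

# Example: a high-risk system with only data governance completed so far.
record = AISystemRecord("resume screener", "high", {"data governance"})
print(record.name, "->", record.open_obligations())
```

Keeping the `done` set backed by audit evidence (step 4) is what distinguishes a defensible inventory from a spreadsheet of assertions.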
Related terms
NIST AI RMF
The NIST AI Risk Management Framework (AI RMF 1.0) is a voluntary framework, published by NIST in January 2023, for identifying, measuring, and managing risks from AI systems.
ISO/IEC 42001
ISO/IEC 42001:2023 is the first international certifiable management-system standard for artificial intelligence.
AI Governance
AI governance is the set of policies, processes, roles, and controls an organization uses to develop, deploy, and operate AI systems responsibly and in compliance with applicable laws, standards, and stakeholder expectations.
