In conversations about AI governance in 2026, two names come up more than any others: ISO 42001 and the EU AI Act. They are often discussed in the same sentence, and just as often confused with each other.
This confusion has practical consequences. Compliance teams pursuing one and assuming it covers the other are setting themselves up for unpleasant discoveries. Procurement teams asking for "AI compliance" without specifying which regime are getting answers that may not match what they actually need. Founders trying to decide where to invest first are picking based on incomplete information.
The two are complementary rather than interchangeable: neither can fully replace the other, because they differ fundamentally in nature, scope, and obligation. This post lays out the differences clearly, explains how they fit together, and offers a practical view on what to do about both.
The Core Distinction in One Sentence
The simplest framing: the EU AI Act is a law that tells you what outcomes you must achieve. ISO 42001 is a standard that tells you how to organise yourself to achieve them.
Everything else in the comparison flows from that distinction.
What Each One Actually Is
The EU AI Act
The EU AI Act (Regulation (EU) 2024/1689) is binding European legislation. It entered into force on 1 August 2024, with obligations phasing in through August 2027. Its purpose is to regulate the placing of AI systems on the EU market and the use of AI within the EU.
Key features:
- Legally binding within the EU and on entities placing AI on the EU market or whose AI outputs are used in the EU
- Risk-based, with four tiers: unacceptable risk (banned), high risk (heavily regulated), limited risk (transparency obligations), minimal risk (no obligations)
- Prescriptive, with specific requirements for risk management, data governance, technical documentation, human oversight, accuracy, and robustness
- Enforced by national competent authorities and the European AI Office, with substantial penalties (up to €35 million or 7% of global turnover for prohibited practices)
ISO 42001
ISO/IEC 42001:2023 is an international voluntary management system standard, published in December 2023. Its purpose is to provide organisations with a framework for establishing, implementing, maintaining, and continually improving an AI Management System (AIMS).
Key features:
- Voluntary, applicable globally, used wherever organisations choose to align with it
- Process-based, focused on how the organisation governs AI rather than on specific outcomes for specific AI systems
- Principle-based, providing flexibility in how organisations meet requirements
- Certifiable by accredited third-party certification bodies, producing a recognised credential
- No penalties for non-compliance, though loss of certification is a meaningful consequence for organisations that hold it
The two were designed to complement each other. The Act tells regulated entities what they must achieve. The standard provides a structured operating model for achieving it.
Who Each One Applies To
The EU AI Act
The Act applies to:
- Providers of AI systems placed on the EU market or whose output is used in the EU, regardless of where the provider is located
- Deployers (users) of AI systems within the EU
- Importers and distributors of AI systems within the EU
Crucially, it has extraterritorial reach. An Indian SaaS company providing an AI-powered tool to EU customers is in scope of the Act, even if it has no physical presence in the EU.
The Act's risk-based structure means most obligations attach only to AI systems classified as high-risk under Annex III (biometric identification, critical infrastructure, education, employment, essential services, law enforcement, migration, justice). For AI systems outside the high-risk categories, obligations are largely limited to transparency duties and AI literacy requirements.
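As a mental model, the four-tier triage can be sketched in a few lines. This is an illustration only: the category labels below are abbreviated stand-ins, not a legal classification, and real scoping requires reading Article 5 and Annex III directly.

```python
# Illustrative sketch of the EU AI Act's four risk tiers.
# The sets below are abbreviated examples, not an authoritative
# or complete legal classification.

PROHIBITED = {"social_scoring", "untargeted_face_scraping"}  # Article 5
HIGH_RISK = {"biometric_id", "critical_infrastructure", "education",
             "employment", "essential_services", "law_enforcement",
             "migration", "justice"}  # abbreviated Annex III areas
LIMITED_RISK = {"chatbot", "deepfake_generator"}  # transparency duties

def classify(use_case: str) -> str:
    """Return the risk tier for a (simplified) use-case label."""
    if use_case in PROHIBITED:
        return "unacceptable: banned"
    if use_case in HIGH_RISK:
        return "high-risk: full high-risk obligations apply"
    if use_case in LIMITED_RISK:
        return "limited risk: transparency obligations"
    return "minimal risk: no specific obligations"
```

The point of the sketch is the ordering: prohibition is checked before high-risk classification, and anything unmatched falls through to minimal risk, which is where most AI systems land.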
ISO 42001
ISO 42001 applies to any organisation that chooses to adopt it — globally, with no geographic or sectoral limitations. The standard explicitly recognises three categories of organisation:
- AI providers (those placing AI systems on the market)
- AI producers (those building AI systems)
- AI users (those deploying AI in operations)
There are no thresholds, no risk classifications, and no extraterritorial mechanics. Organisations adopt it for commercial, operational, or assurance reasons, not because law requires it.
What Each One Requires
The EU AI Act (high-risk obligations)
For high-risk AI systems, the Act imposes specific obligations including:
- A risk management system covering the lifecycle (Article 9)
- Data and data governance requirements, including quality and bias (Article 10)
- Technical documentation (Article 11)
- Record-keeping and logging (Article 12)
- Transparency to deployers (Article 13)
- Human oversight measures (Article 14)
- Accuracy, robustness, and cybersecurity (Article 15)
- Quality management systems for providers (Article 17)
- Conformity assessment before placing on the market
- CE marking and registration in the EU database
- Post-market monitoring (Article 72)
- Serious incident reporting
These are largely outcome obligations — what the AI system itself must do or what records must exist about it.
ISO 42001 (clauses and Annex A)
ISO 42001 imposes management system obligations across:
- Organisational context and AIMS scope (Clause 4)
- Leadership commitment and AI policy (Clause 5)
- AI risk assessment, risk treatment, and impact assessment (Clause 6)
- Resources, competence, awareness, communication (Clause 7)
- Operational planning, lifecycle controls, and impact assessment in practice (Clause 8)
- Performance evaluation and internal audit (Clause 9)
- Continual improvement (Clause 10)
- Implementation of selected controls from Annex A (covering AI policy, organisation, resources, impact assessment, lifecycle, data, transparency, use, and supplier relationships)
These are largely process obligations — how the organisation must organise itself to make defensible decisions about AI.
The overlap is real. Several Act articles map cleanly onto ISO 42001 clauses and controls — risk management, data governance, documentation, human oversight, and post-market monitoring all have clear correspondences. Rough estimates of the overlap range from 40% to 60%.
Where They Diverge
The differences are equally important to understand.
The Act has prohibitions; ISO 42001 does not. The EU AI Act explicitly bans certain AI practices (untargeted facial recognition scraping, social scoring, emotion recognition in workplaces and schools, certain categories of manipulation). ISO 42001 does not prohibit specific practices — it requires that the organisation has assessed and addressed risks, but does not draw bright lines on what AI must not do.
The Act has conformity assessment and CE marking; ISO 42001 does not. Some high-risk AI systems under the Act require formal conformity assessment by a notified body before market placement, and CE marking on the product. ISO 42001 produces an organisation-level certificate, not a per-system mark.
The Act regulates general-purpose AI models; ISO 42001 does not specifically. The Act has dedicated obligations for providers of general-purpose AI models, including transparency and copyright requirements. ISO 42001's coverage of GPAI risks comes through general risk and impact assessment processes rather than dedicated provisions.
The Act has serious incident reporting; ISO 42001 has internal incident management. The Act requires reporting of serious incidents to national authorities within specific timeframes. ISO 42001 requires the organisation to handle incidents through its management system but does not impose external reporting obligations on its own.
The Act applies to systems; ISO 42001 applies to organisations. This is the deepest difference. The Act asks: is this AI system compliant? ISO 42001 asks: is this organisation's approach to AI defensible?
How They Fit Together
The most accurate way to think about the relationship is:
ISO 42001 builds the management infrastructure that EU AI Act compliance requires.
A high-risk AI provider trying to comply with the Act needs:
- A risk management system (Article 9). ISO 42001 Clause 6 provides the methodology and Clause 8 provides the operational layer.
- Data governance (Article 10). ISO 42001 Annex A.7 provides the control framework.
- Technical documentation (Article 11). ISO 42001 Clauses 7 and 8 provide documentation discipline.
- Human oversight (Article 14). ISO 42001 Annex A.9 includes human oversight controls.
- Quality management (Article 17). ISO 42001 is a quality management framework for AI.
- Post-market monitoring (Article 72). ISO 42001 Clauses 9 and 10 establish performance evaluation and improvement.
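The article-to-clause pairings above can be expressed as a simple crosswalk, the kind of structure a compliance tracker might start from. The mapping below simply restates the pairings named in this post; it is not an official concordance, and none yet exists.

```python
# Sketch of the Act-to-ISO 42001 crosswalk described above.
# Pairings reflect this post's mapping, not a harmonised standard.

ACT_TO_ISO42001 = {
    "Article 9 (risk management system)":   ["Clause 6", "Clause 8"],
    "Article 10 (data governance)":         ["Annex A.7"],
    "Article 11 (technical documentation)": ["Clause 7", "Clause 8"],
    "Article 14 (human oversight)":         ["Annex A.9"],
    "Article 17 (quality management)":      ["Clauses 4-10 (whole AIMS)"],
    "Article 72 (post-market monitoring)":  ["Clause 9", "Clause 10"],
}

def iso_anchors(article: str) -> list[str]:
    """Look up the ISO 42001 clauses/controls supporting an Act article."""
    return ACT_TO_ISO42001.get(article, [])
```

An empty result for an article is itself informative: it flags an Act obligation (conformity assessment, CE marking, registration) that the management system does not cover.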
ISO 42001 alone does not satisfy the EU AI Act. The Act has prescriptive obligations — conformity assessment, CE marking, registration, specific incident reporting — that the standard does not cover. But ISO 42001 makes EU AI Act compliance materially more achievable, more efficient, and more sustainable.
The harmonisation pathway makes this even clearer. Through the prEN ISO/IEC 42001 process, European standards bodies are adapting ISO 42001 into a European Norm. When that process completes and the standard is published in the EU's Official Journal, ISO 42001 certification will likely move from "useful preparation" to a presumption of conformity with parts of the AI Act. That is a material upgrade in legal weight.
Practical Decisions for Different Organisations
If you sell AI products into the EU
You need to address the EU AI Act regardless of anything else. Determine your risk classification under the Act. If you are high-risk, the Act's obligations are binding regardless of certification choices. Use ISO 42001 to build the management infrastructure that operationalises those obligations.
Order of operations: Implement ISO 42001 as the foundation; layer EU AI Act-specific requirements (conformity assessment, technical documentation per Article 11, post-market monitoring per Article 72) on top.
If you sell AI products globally but not into the EU
The EU AI Act is not binding, but its requirements are increasingly being treated as global best practice. Customers in the US, UK, Singapore, and elsewhere often look at AI Act-readiness as a proxy for governance maturity.
Order of operations: Pursue ISO 42001 certification as the primary governance investment. Track the AI Act's requirements as best practice. Watch UK, US state-level, and other regulatory developments that may bind you locally.
If you are an AI user, not a provider
If you deploy AI within the EU, you have deployer obligations under the Act — particularly around human oversight, monitoring, and (if you are a public body) fundamental rights impact assessments. ISO 42001 covers user organisations explicitly and helps operationalise these obligations.
Order of operations: Implement ISO 42001 with deployer-focused scope. Track the Act's deployer obligations specifically.
If you are an Indian organisation with no immediate EU exposure
The EU AI Act is not binding, but global procurement increasingly references it. ISO 42001 is the more direct investment for Indian customers, Indian regulators (DPDPA-related expectations), and global buyers outside the EU.
Order of operations: Pursue ISO 42001. Watch the AI Act's obligations as a future-proofing exercise.
A Side-by-Side Summary
| Dimension | EU AI Act | ISO 42001 |
|---|---|---|
| Type | Binding regulation | Voluntary standard |
| Geographic reach | EU + extraterritorial | Global |
| Approach | Prescriptive, outcome-based | Principle-based, process-based |
| Scope | Specific AI systems (risk-tiered) | Whole organisation |
| Certification | Conformity assessment for high-risk | Third-party AIMS certification |
| Penalties | Up to €35M or 7% turnover | None statutorily; loss of certificate |
| Prohibited practices | Yes | No |
| GPAI provisions | Yes (specific) | Through general risk processes |
| Incident reporting | To national authorities | Internal to management system |
| Status | In force, phased through 2027 | Published, mature, certifiable |
A Closing Note
Framing ISO 42001 and the EU AI Act as a choice is a mistake. They are not alternatives. One is a law you may have to comply with; the other is the operating model that makes that compliance — and broader AI governance — sustainable.
For organisations exposed to the EU AI Act, the practical path is to implement ISO 42001 first, layer Act-specific requirements on top, and track the harmonisation process that will eventually link them more formally. For organisations not exposed to the Act, ISO 42001 is the cleanest, most globally recognised investment in AI governance maturity.
The decisive year is 2026. The Act's high-risk obligations bind from 2 August 2026. Customers, investors, and boards are asking sharper questions. The organisations that have done the work by mid-2026 will be in materially better positions than those still scoping projects in early 2027.
The most useful clarifying move in most organisations is to separate two questions: "what does the law require us to achieve?" and "how should we organise ourselves to achieve it?" Once these are separated, both the EU AI Act and ISO 42001 become tractable rather than overwhelming.