◆ XILIGENT FIELD NOTES · AI LIFECYCLE GOVERNANCE
Field Notes · Issue 12 · APR 26, 2026

The AI System Lifecycle Under ISO 42001: Stages and Gate Criteria

Seven stages, seven gates. A walk through ISO 42001's lifecycle approach — from concept and design through retirement — and the implementation patterns that produce better outcomes.

From the essay
The eventual goal is not lifecycle perfection. It is a working management system in which every AI system has a clear, current, defensible position in the lifecycle — and moving between stages requires a deliberate, documented decision rather than drift.

One of the more useful — and more under-discussed — features of ISO 42001 is its emphasis on managing AI systems across their full lifecycle, not just at deployment. Most organisations new to AI governance focus their attention on the moment a model goes live. ISO 42001 forces attention much earlier and much later than that.

The standard's lifecycle thinking is captured primarily in Clause 8 (Operation) and Annex A control domain A.6 (AI system lifecycle). Together they define a structured progression from initial concept through retirement, with specific governance expectations at each stage. Properly implemented, this becomes the operational backbone of an AI Management System (AIMS).

This post walks through the lifecycle as ISO 42001 frames it — the stages, the gate criteria, the documentation expectations, and the practical implementation patterns we see working in 2026.


Why a Lifecycle Approach Matters

The temptation to govern AI only at deployment is understandable but expensive. Most of the decisions that shape an AI system's risk profile — what data it learns from, what objectives it optimises for, who it affects, what trade-offs it makes — are made long before the model is in production. The later those decisions are revisited, the harder and more costly they are to reverse.

A lifecycle approach embeds governance at the points where decisions are actually made:

  • Concept and design — when the purpose of the system is defined
  • Data and development — when the substance of the system is built
  • Verification and validation — when fitness for purpose is tested
  • Deployment — when the system enters production
  • Operation and monitoring — when real-world behaviour unfolds
  • Change and retraining — when the system evolves
  • Retirement — when the system exits

ISO 42001 does not invent these stages — they are visible in any mature ML or product engineering practice. What the standard does is make them an explicit part of the management system, with documented gate criteria, decision rights, and accountability at each.
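
To make this concrete, the lifecycle and its gates can be represented as a small amount of structure in whatever tooling holds your AI inventory. A minimal Python sketch; the stage and field names are ours, not terminology from the standard:

    from dataclasses import dataclass
    from datetime import date
    from enum import Enum

    class Stage(Enum):
        # The seven stages as framed in this post (not wording from ISO 42001 itself).
        CONCEPT_AND_DESIGN = 1
        DATA_AND_DEVELOPMENT = 2
        VERIFICATION_AND_VALIDATION = 3
        DEPLOYMENT = 4
        OPERATION_AND_MONITORING = 5
        CHANGE_AND_RETRAINING = 6
        RETIREMENT = 7

    @dataclass
    class GateDecision:
        """A recorded decision to move a system from one lifecycle stage to the next."""
        system_id: str
        from_stage: Stage
        to_stage: Stage
        decided_on: date
        decided_by: str                 # the governance forum, not an individual
        criteria_met: dict[str, bool]   # each documented gate criterion and whether it is satisfied
        approved: bool

    def may_advance(decision: GateDecision) -> bool:
        # A transition is only valid if every gate criterion is met and the
        # governance forum has explicitly approved it: decision, not drift.
        return decision.approved and all(decision.criteria_met.values())

The point is not the code but the shape: a stage, a set of criteria, and a recorded decision by an accountable body.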


Stage 1: Concept and Design

The earliest stage — often the one most loosely governed — is where an AI system's purpose, intended use, intended users, and intended deployment context are defined. This is where the highest-leverage governance lives, because it is where ambiguity is cheapest to resolve.

Key questions at this stage:

  • What problem is this AI system solving?
  • Who is it for? Who is affected by it?
  • What outcomes are we optimising for? What trade-offs are acceptable?
  • What is the expected impact on individuals and on society?
  • What are the alternatives, including non-AI alternatives?

Required documentation:

  • An initial system specification, including intended purpose and scope
  • An initial AI impact assessment (per Clause 6.1.4) — what could go wrong for individuals, groups, and society if this system is built and deployed?
  • An initial risk assessment — what risks does this create for the organisation?
  • A go/no-go decision recorded by the appropriate governance forum

Gate criteria for moving to the next stage:

  • The intended purpose and scope are clearly defined
  • An initial impact assessment has been conducted and reviewed
  • Risks have been identified at a high level
  • Stakeholders are aware and the appropriate governance body has approved the project

This stage typically takes days to weeks. The cost of getting it wrong is the cost of building a system that should not have been built — which is significantly higher than the cost of delaying it briefly.


Stage 2: Data and Development

Once a system has been approved at the concept stage, attention moves to data and model development. This is where ISO 42001's data governance controls (Annex A.7) become operationally important.

Key questions at this stage:

  • What data are we using? Where did it come from?
  • Is the data appropriate for the intended purpose? Does it represent the deployment population?
  • What biases might the data contain? How are we addressing them?
  • What is our model architecture and why?
  • What objectives are we training against?

Required documentation:

  • Data provenance and sourcing records
  • Data quality assessments
  • Bias and representativeness analysis
  • Model design rationale
  • Training procedures and reproducibility records
  • Data and model versioning

Gate criteria for moving to the next stage:

  • Data sources are documented and demonstrably fit for purpose
  • Data governance controls are in operation
  • Model design is documented and traceable
  • Initial model artefacts are versioned and reproducible

This stage often takes weeks to months. Common failure modes include data with unclear provenance, training pipelines that cannot be reproduced, and inadequate documentation of design decisions.
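
One lightweight way to make "versioned and reproducible" auditable is to store a provenance record, including a content hash, alongside every dataset used in training. A sketch under the assumption that datasets are files you can hash; the field names are illustrative, not prescribed by the standard:

    import hashlib
    import json
    from dataclasses import asdict, dataclass
    from pathlib import Path

    @dataclass
    class DatasetRecord:
        """Provenance and versioning record for one training dataset."""
        name: str
        source: str              # where the data came from: system, vendor, collection process
        licence: str             # terms under which the data may be used
        represents: str          # population and period the data is meant to represent
        known_limitations: str   # gaps, biases, representativeness concerns
        sha256: str              # content hash: the "version" of this exact dataset

    def hash_file(path: Path) -> str:
        # Hash the raw bytes so any change to the dataset yields a new version.
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def write_record(record: DatasetRecord, out_path: Path) -> None:
        # Keep the record next to the model artefacts so the evidence travels with them.
        out_path.write_text(json.dumps(asdict(record), indent=2))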


Stage 3: Verification and Validation

Before a system is deployed, ISO 42001 expects it to be tested for fitness for purpose. This is broader than the conventional ML practice of accuracy testing — it includes safety, fairness, robustness, and alignment with intended use.

Key questions at this stage:

  • Does the system perform as intended on representative test data?
  • How does it perform across relevant subgroups? Is it fair?
  • How does it handle edge cases and adversarial inputs?
  • How does it behave under realistic deployment conditions?
  • Are the residual risks acceptable for the intended deployment?

Required documentation:

  • Test methodology and test data
  • Performance results, including subgroup analysis
  • Robustness and adversarial testing results
  • A validation report
  • An updated impact assessment incorporating actual system behaviour
  • A residual risk assessment

Gate criteria for moving to the next stage:

  • The system performs acceptably against documented criteria
  • Subgroup performance has been assessed
  • Residual risks are documented and accepted by the appropriate authority
  • The validation report is approved
  • Deployment readiness is confirmed by the governance committee

This is the stage where many AI projects struggle. The pressure to move to deployment is high; the discipline to wait for adequate validation is hard. ISO 42001 introduces explicit gate criteria precisely to make that discipline structural rather than dependent on individual judgement.
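
The mechanical part of "subgroup performance has been assessed" is simple; the hard part is choosing the subgroups and the acceptance thresholds. A minimal sketch in plain Python, assuming a classifier evaluated on a held-out test set with one subgroup attribute:

    from collections import defaultdict

    def subgroup_accuracy(records):
        """Accuracy per subgroup from (subgroup, prediction, label) tuples."""
        correct, total = defaultdict(int), defaultdict(int)
        for subgroup, prediction, label in records:
            total[subgroup] += 1
            correct[subgroup] += int(prediction == label)
        return {g: correct[g] / total[g] for g in total}

    # Illustrative only: a gap like this is exactly what the validation
    # report and the residual risk assessment should surface and explain.
    results = [("A", 1, 1), ("A", 0, 0), ("A", 1, 0),
               ("B", 1, 1), ("B", 1, 1), ("B", 0, 0)]
    print(subgroup_accuracy(results))   # roughly {'A': 0.67, 'B': 1.0}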


Stage 4: Deployment

Deployment is the moment the system moves from controlled environments into real-world use. ISO 42001 treats this as a distinct gated stage, not as an automatic consequence of validation.

Key questions at this stage:

  • Are deployment environments appropriately controlled?
  • Are users (deployers, end users, affected individuals) appropriately informed?
  • Are human oversight mechanisms in place?
  • Are monitoring and alerting in place to detect issues quickly?
  • Is incident response ready?

Required documentation:

  • Deployment plan and rollback procedures
  • User-facing documentation and transparency information
  • Human oversight design (how, when, by whom)
  • Monitoring plan
  • Initial operational risk assessment

Gate criteria for moving to operation:

  • Deployment plan approved
  • Monitoring infrastructure operational and tested
  • Human oversight roles assigned and trained
  • User-facing transparency information published
  • Incident response runbooks reviewed and ready
  • Deployment authorised by the appropriate governance forum

Phased rollouts (canary deployments, limited initial populations) are not required by the standard but are increasingly expected as best practice for higher-risk systems.
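
Where a phased rollout is used, the governance-relevant part is that the phases and the criteria for widening them are documented in the deployment plan. A sketch of one common mechanism, a deterministic traffic split; the phase percentages are purely illustrative:

    import hashlib

    # Hypothetical rollout plan: each phase is widened only after a review
    # of the monitoring data from the previous phase.
    ROLLOUT_PHASES = [0.01, 0.05, 0.25, 1.00]

    def in_canary(user_id: str, fraction: float) -> bool:
        """Assign a stable slice of users to the new system.

        Hashing the user id keeps the same users in the canary as the fraction
        grows, which keeps before/after monitoring comparisons consistent.
        """
        bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 10_000
        return bucket < fraction * 10_000

    # During the first phase, roughly 1% of users are served by the new system.
    print(in_canary("user-42", ROLLOUT_PHASES[0]))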


Stage 5: Operation and Monitoring

Once deployed, the system enters its longest lifecycle stage — sometimes years. ISO 42001 expects continued, structured attention here, not the "deploy and forget" pattern that earlier ML practice often defaulted to.

Key questions at this stage:

  • Is the system performing as expected against monitored metrics?
  • Is performance drifting? In which directions and for which subgroups?
  • Are users encountering problems? What are they reporting?
  • Are there any operational incidents that require investigation?
  • Are external conditions changing in ways that affect the system?

Required documentation:

  • Ongoing performance monitoring records
  • Drift detection and analysis
  • User feedback and complaint logs
  • Incident records and investigations
  • Periodic review reports (typically quarterly for high-impact systems)

Operational controls in this stage:

  • Continuous or periodic performance monitoring
  • Subgroup performance tracking
  • Drift detection
  • Human oversight as designed
  • Incident response capability
  • Periodic review by the AI governance committee

This is the stage where most organisations have the largest gap between policy and practice in 2026. Monitoring exists, but the link from detection to investigation to action is often weak. Closing that loop is the highest-leverage operational improvement most AIMS implementations need.
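
Drift detection itself does not need to be elaborate. A common starting point is to compare the distribution of a monitored feature or score against the distribution observed at validation time, for example with a Population Stability Index. A minimal sketch; the thresholds quoted in the comment are conventions, not requirements:

    import math

    def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
        """Population Stability Index between a baseline and a live window.

        `expected` is the distribution at validation time, `actual` a recent
        production window. Rough convention: below 0.1 stable, 0.1 to 0.25
        worth investigating, above 0.25 significant drift.
        """
        lo, hi = min(expected), max(expected)
        width = (hi - lo) / bins or 1.0

        def proportions(values: list[float]) -> list[float]:
            counts = [0] * bins
            for v in values:
                idx = min(max(int((v - lo) / width), 0), bins - 1)
                counts[idx] += 1
            # A small floor avoids division by zero for empty bins.
            return [max(c / len(values), 1e-6) for c in counts]

        e, a = proportions(expected), proportions(actual)
        return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

The statistic is the easy part. What closes the loop is a rule of the form "a PSI above the agreed threshold opens an investigation, and the investigation ends in a recorded decision", rather than an alert that scrolls past.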


Stage 6: Change Management and Retraining

AI systems change. They are retrained, fine-tuned, updated with new data, integrated with new components, and redeployed. ISO 42001 treats material changes as triggering re-evaluation through the lifecycle gates.

Key questions at this stage:

  • What is the proposed change? Why?
  • Does it materially alter the system's behaviour, risk profile, or intended use?
  • Does it require re-validation? Does it require an updated impact assessment?
  • What are the deployment risks of the changed system?

Required documentation:

  • Change description and justification
  • Impact analysis (does this change require revalidation?)
  • Updated validation if required
  • Updated impact assessment if material
  • Approval through the appropriate governance forum

Practical pattern that works:

A change taxonomy that distinguishes between minor updates (bug fixes, infrastructure changes, parameter tuning within bounds) and material updates (retraining on new data, architectural changes, scope expansion). Minor updates use lightweight review; material updates trigger a return to relevant earlier lifecycle stages.

Without this taxonomy, organisations either over-govern (treating every code commit as requiring full re-validation) or under-govern (allowing material changes to ship without review). The standard expects appropriate proportionality.
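
A sketch of how that taxonomy can be made explicit rather than left as tribal knowledge. The categories and review paths below are illustrative; the point is that the routing rule is written down and the classification of each change is itself a reviewable decision:

    from enum import Enum

    class ChangeType(Enum):
        BUG_FIX = "bug fix"
        INFRASTRUCTURE = "infrastructure change"
        PARAMETER_TUNING = "parameter tuning within documented bounds"
        RETRAIN_NEW_DATA = "retraining on new data"
        ARCHITECTURE = "architectural change"
        SCOPE_EXPANSION = "scope expansion"

    # Material changes return to the relevant earlier lifecycle stages;
    # minor changes take a lightweight review path.
    MATERIAL = {ChangeType.RETRAIN_NEW_DATA, ChangeType.ARCHITECTURE, ChangeType.SCOPE_EXPANSION}

    def review_path(change: ChangeType) -> str:
        if change in MATERIAL:
            return "updated impact assessment, re-validation, and gate approval"
        return "lightweight review: peer review plus a change log entry"

    print(review_path(ChangeType.RETRAIN_NEW_DATA))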


Stage 7: Retirement

The final stage — and the one most often forgotten — is the structured retirement of an AI system. ISO 42001's lifecycle and data controls extend to end-of-life, including decisions about what data is retained, archived, or deleted when a system is decommissioned.

Key questions at this stage:

  • Why is the system being retired? Is it being replaced, decommissioned, or reaching end of life?
  • What dependencies does it have? Who is using it?
  • What data must be retained, archived, or deleted?
  • What documentation must be preserved for regulatory or audit purposes?
  • What communication is required to affected stakeholders?

Required documentation:

  • Retirement plan
  • Data handling decisions (retention, archival, deletion)
  • Stakeholder communications
  • Final operational records
  • Confirmation that retirement has been completed

Gate criteria for completion:

  • All in-scope data has been handled per the plan
  • Dependent systems and users have been notified and transitioned
  • Records have been retained per the retention schedule
  • The system is verifiably no longer in operation
  • Retirement is confirmed by the governance committee

The most common failure mode at this stage is partial retirement — the system is "switched off" but not truly retired, leaving data, dependencies, and access pathways in place for years afterwards. ISO 42001's explicit retirement requirements push organisations to do this properly.


Implementation Patterns That Work

Across organisations implementing ISO 42001 lifecycle controls in 2026, a few patterns consistently produce better outcomes:

A defined intake process for new AI initiatives. Every proposed AI system enters the lifecycle through a single structured intake — typically a one-page proposal reviewed by the AI governance committee. This produces a defensible record from day one and prevents the "it just appeared in production" pattern.

A central AI inventory tied to lifecycle stage. Each AI system in the inventory carries a current lifecycle stage, the date of last governance review, and the responsible owner. This is the fastest way to surface systems that have stalled in a stage or skipped a gate.
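
A sketch of the kind of query that makes such an inventory useful, assuming the inventory is held as structured records (a spreadsheet export, a GRC platform API, or a plain list). The review intervals are illustrative; the quarterly figure for high-impact systems mirrors the cadence mentioned under Stage 5:

    from dataclasses import dataclass
    from datetime import date, timedelta

    @dataclass
    class InventoryEntry:
        system_id: str
        name: str
        lifecycle_stage: str   # one of the seven stages described in this post
        owner: str
        last_review: date
        risk_tier: str         # e.g. "high", "medium", "low", per your own tiering

    REVIEW_INTERVAL = {
        "high": timedelta(days=90),
        "medium": timedelta(days=180),
        "low": timedelta(days=365),
    }

    def overdue(inventory: list[InventoryEntry], today: date) -> list[InventoryEntry]:
        """Systems whose last governance review is older than their risk tier allows."""
        return [e for e in inventory if today - e.last_review > REVIEW_INTERVAL[e.risk_tier]]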

Governance committee with monthly cadence. Lifecycle decisions concentrate at the committee. Monthly meetings produce sufficient pace; weekly is overkill; quarterly is too slow.

Proportionality. Not every AI system requires full lifecycle ceremony. A risk-tiered approach — applying full discipline to high-impact systems and lighter governance to low-risk ones — is what makes the system sustainable. The standard explicitly supports this through risk-based application.

Tooling that links inventory, evidence, and decisions. A GRC platform that ties the AI system inventory to the lifecycle stage and the supporting evidence makes audit preparation a query rather than a project.


A Closing Note

ISO 42001's lifecycle approach is one of the standard's most valuable contributions to the wider AI governance conversation. By embedding governance at concept, data, validation, deployment, operation, change, and retirement — rather than concentrating it at deployment — it produces a structurally more resilient way of doing AI.

For organisations new to formal AI governance, the practical first step is not to implement all seven stages immediately. It is to map your current AI portfolio against the lifecycle, identify which stages are well-governed and which are weak, and prioritise the gaps. Most organisations have decent design and deployment governance and weak data, change, and retirement governance. Strengthening those three is usually the highest-impact early investment.

The eventual goal is not lifecycle perfection. It is a working management system in which every AI system in the organisation has a clear, current, defensible position in the lifecycle — and in which moving from one stage to the next requires a deliberate, documented decision rather than drift.


The single most useful exercise for organisations starting on this is to take three of their highest-impact AI systems and ask, for each, "what stage is this in, what documentation supports that, and what would it take to pass through the next gate?" The answers reveal, in miniature, what the whole AIMS implementation will need to address.


© Xiligent 2026 · All rights reserved