In board risk committee meetings across regulated financial institutions, I encounter the same assumption with striking consistency: "We have ISO 27001. That covers AI, right?"

It does not. And the gap between what ISO 27001 covers and what ISO 42001 requires is not a technicality. It is a structural governance exposure that regulators are beginning to surface in examination — and that boards are accountable for whether or not they understand it.

Board Risk Signal

Institutions relying solely on ISO 27001 are securing the data — but not governing the AI.

Regulators examining AI decision systems increasingly expect documentation aligned with ISO 42001 principles: model inventories, validation records, bias assessments, explainability posture, and accountable ownership. ISO 27001 addresses none of these.

The governance gap is no longer theoretical. It is becoming visible in examinations — and the remediation burden falls on the institution.

This article is written for CISOs, CROs, board risk committee members, and anyone who advises them. It is not a technical standards comparison. It is a governance briefing: what each standard actually mandates, where the gap lives, and what your institution needs to do differently if AI is operating in production.

Author Credential

Rehan Kausar is one of fewer than 10 dual ISO 42001 and ISO 27001 Lead Auditors globally. This analysis is drawn from direct assessment experience across regulated financial institutions under Fed, OCC, and NCUA oversight.

What ISO 27001 Actually Covers — And What It Doesn't

ISO 27001 is an information security management system standard. Its mandate is the confidentiality, integrity, and availability of information. It asks: is the data protected?

This is a well-designed, rigorous standard. A properly implemented ISO 27001 program establishes strong controls over how information is stored, accessed, transmitted, and protected from breach. It covers access management, encryption, incident response, supplier security, business continuity, and physical security.

What it does not cover — by design, because it was never intended to — is how AI systems make decisions. It does not ask whether a model is fair. It does not ask whether an output is explainable to a regulator. It does not ask whether an AI system has drifted from its original performance baseline. It does not establish a methodology for documenting the purpose, training data, validation status, or accountability structure of an AI model.

ISO 27001 protects the data flowing through AI systems. It says nothing about whether those systems can be trusted, validated, or examined.

ISO 27001

Information Security Management

  • Data confidentiality, integrity, availability
  • Access controls and identity management
  • Encryption and transmission security
  • Incident response and breach management
  • Supplier and third-party security
  • Business continuity for information systems
  • Physical and environmental security

ISO 42001

AI Management System

  • AI system inventory and classification
  • Model purpose, intended use, risk profile
  • Training data governance and provenance
  • Validation, testing, performance monitoring
  • Bias assessment and fairness controls
  • Explainability and human oversight requirements
  • AI lifecycle management and retirement

What ISO 42001 Actually Requires

ISO 42001 — published jointly by ISO and the International Electrotechnical Commission in 2023, as ISO/IEC 42001 — is the first international standard specifically designed for AI management systems. Its mandate is fundamentally different from 27001: it asks not whether your data is secure, but whether your AI systems are responsibly designed, deployed, and governed.

The standard establishes six core requirements that have no equivalent in ISO 27001:

AI system inventory and classification. Every AI system in production must be documented with its purpose, intended use, risk classification, and accountable owner. This directly maps to the Shadow AI problem — an institution cannot comply with 42001 without first knowing what AI is running.
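To make the inventory requirement concrete, here is a minimal sketch of what a machine-readable inventory record might look like. ISO 42001 does not prescribe a schema; the field names, risk tiers, and example values below are illustrative assumptions, chosen to show that every system carries a purpose, a classification, an accountable owner, and a validation status.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum
from typing import Optional

class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

@dataclass
class AISystemRecord:
    """One entry in an ISO 42001-style AI system inventory (fields are illustrative)."""
    system_id: str
    name: str
    purpose: str                    # documented business purpose
    intended_use: str               # approved scope of use
    risk_tier: RiskTier             # the institution's own risk classification
    accountable_owner: str          # a named role or individual, not a team alias
    vendor_embedded: bool           # vendor-activated AI counts toward the inventory too
    last_validated: Optional[date]  # None is itself a finding

record = AISystemRecord(
    system_id="AIS-0042",
    name="Fraud transaction scorer",
    purpose="Flag potentially fraudulent card transactions",
    intended_use="Analyst triage queue only; no automatic declines",
    risk_tier=RiskTier.HIGH,
    accountable_owner="Head of Fraud Analytics",
    vendor_embedded=True,
    last_validated=None,
)

# A high-risk system with no validation date is exactly the kind of gap
# a board-level report should surface.
needs_remediation = record.risk_tier is RiskTier.HIGH and record.last_validated is None
print(needs_remediation)  # True
```

Even this toy schema makes the Shadow AI point visible: a vendor-embedded, high-risk model with no validation date cannot hide in a spreadsheet column the way it can in an unstructured narrative.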

Impact assessment. Before deployment, high-risk AI systems require a documented impact assessment covering potential harms to individuals, groups, and society. This is analogous to a Privacy Impact Assessment but scoped to AI-specific risks: discriminatory outcomes, error rates, accountability gaps.

Data governance for AI. ISO 42001 requires documented controls over training data — provenance, quality, representativeness, and bias screening. This is distinct from ISO 27001's data protection requirements. 27001 asks whether the data is secure. 42001 asks whether the data was appropriate to train on in the first place.

Validation and performance monitoring. AI systems must be validated before deployment and monitored in production for drift, degradation, and fairness. This mirrors the Fed's SR 11-7 requirements for model validation — and for financial institutions already subject to SR 11-7, ISO 42001 provides the international standards framework that strengthens the evidentiary posture of your existing model risk program.
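Drift monitoring, as described above, is routinely operationalized in model risk programs with simple distributional checks. The sketch below uses the Population Stability Index (PSI), a metric common in financial model monitoring — not a method ISO 42001 mandates — with synthetic data and commonly quoted (but institution-specific) thresholds:

```python
import numpy as np

def population_stability_index(expected, actual, n_bins=10):
    """PSI between a baseline score distribution (e.g., at validation time)
    and production scores. A common rule of thumb: < 0.10 stable,
    0.10-0.25 watch, > 0.25 investigate."""
    # Bin edges come from the baseline distribution's quantiles
    edges = np.percentile(expected, np.linspace(0, 100, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range production scores
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor avoids log(0) on empty bins
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

rng = np.random.default_rng(42)
baseline = rng.normal(0.0, 1.0, 10_000)  # scores when the model was validated
stable = rng.normal(0.0, 1.0, 10_000)    # production scores, no drift
drifted = rng.normal(1.0, 1.0, 10_000)   # production scores after a shift

print(population_stability_index(baseline, stable))   # near zero
print(population_stability_index(baseline, drifted))  # well above 0.25
```

The governance point is not the metric itself but the documented, repeatable check: a scheduled PSI run with logged results is the kind of evidence an examiner can inspect, where an undocumented "we watch the dashboards" is not.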

Human oversight and intervention. The standard requires documented mechanisms for human review of AI outputs — particularly for consequential decisions. For financial institutions, this maps directly to adverse action requirements, fair lending oversight, and AML analyst review workflows.

Transparency and explainability. AI outputs that affect individuals must be explainable in terms those individuals can understand. For consumer-facing financial services AI — credit decisioning, account closures, fraud flags — this is both an ISO 42001 requirement and a regulatory expectation under existing fair lending and consumer protection frameworks.
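For the explainability requirement, one widely used scorecard technique — again an illustrative choice, not something ISO 42001 prescribes — is to rank each feature's contribution relative to a population baseline and report the most adverse ones as reason codes. The coefficients, feature names, and baseline values below are entirely hypothetical:

```python
import numpy as np

# Hypothetical linear scorecard: names, coefficients, and baselines are illustrative.
FEATURES = ["utilization_ratio", "months_since_delinquency", "inquiries_6mo", "tenure_years"]
COEF = np.array([-2.1, 0.4, -0.8, 0.3])          # positive pushes toward approval
POPULATION_MEAN = np.array([0.30, 24.0, 1.0, 6.0])

def reason_codes(applicant, top_n=2):
    """Rank features by how far they pull this applicant's score below the
    population baseline -- the basis for plain-language adverse action reasons."""
    contrib = COEF * (applicant - POPULATION_MEAN)
    order = np.argsort(contrib)  # most negative (most harmful) first
    return [FEATURES[i] for i in order[:top_n] if contrib[i] < 0]

# High utilization and many recent inquiries relative to the baseline
applicant = np.array([0.85, 24.0, 4.0, 6.0])
print(reason_codes(applicant))  # ['inquiries_6mo', 'utilization_ratio']
```

For opaque models the mechanics differ (surrogate models, SHAP-style attributions), but the governance obligation is the same: the institution must be able to trace a specific adverse decision back to specific, articulable factors.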

"ISO 27001 tells you whether your AI system's data is protected. ISO 42001 tells you whether your AI system should be trusted. Boards need both. Most institutions only have one."

— Rehan Kausar, ISO 42001 & 27001 Dual Lead Auditor

The Governance Gap in Practice

The gap between these two standards becomes visible under examination. Consider three scenarios that regulators are actively encountering:

Scenario 1: The validated model with undocumented training data. An institution's fraud detection model passes its ISO 27001 audit cleanly — the data it processes is encrypted, access is controlled, logging is complete. But under ISO 42001 scrutiny, the training data cannot be traced. The model was trained on historical transaction data that included a period during which a protected class was systematically underrepresented in the customer base. The model has never been tested for disparate impact. ISO 27001 saw no problem. ISO 42001 would have flagged it before deployment.

Scenario 2: The vendor-embedded model with no validation history. A core banking platform upgrade activated a credit risk scoring feature. The vendor provided SOC 2 certification and ISO 27001 compliance documentation. There is no ISO 42001 documentation — no impact assessment, no validation record, no drift monitoring configuration. From a regulatory examination perspective, this model is operating without a governance posture, regardless of the security certifications attached to it.

Scenario 3: The explainability gap in adverse action. A consumer lending AI flags an application for denial. The institution's adverse action notice cites standard regulatory categories. But the actual reason — the specific model features that produced the denial — cannot be articulated by anyone in the institution. ISO 27001 cannot help here. ISO 42001's explainability requirements, properly implemented, produce the documentation trail that supports defensible adverse action.

Regulatory Implication

Examiners reviewing AI in regulated financial institutions are not satisfied by information security certifications alone. The questions being asked in 2025–2026 examinations — about model inventories, validation histories, bias testing, and explainability — are ISO 42001 questions. Institutions presenting only ISO 27001 compliance leave the most consequential governance questions unanswered.

Side-by-Side: The Questions Each Standard Answers

[Figure: AI Governance Architecture — Standard Coverage Map. ISO 27001 (information security management) and ISO 42001 (AI management system) cover adjacent surfaces of the board's AI oversight obligation. ISO 27001 alone leaves AI governance unaddressed; both standards are required for full board-level coverage.]
| Governance Question | ISO 27001 | ISO 42001 |
| --- | --- | --- |
| Is customer data encrypted and protected? | ✓ Covered | Partial |
| Do we know every AI system in production? | Not addressed | ✓ Required |
| Was this model validated before deployment? | Not addressed | ✓ Required |
| Has this model been tested for bias? | Not addressed | ✓ Required |
| Can we explain an AI decision to a regulator? | Not addressed | ✓ Required |
| Is there a human review process for consequential AI? | Not addressed | ✓ Required |
| Is our vendor's AI use properly assessed? | Security only | ✓ Full AI governance |
| Are we monitoring AI performance in production? | Not addressed | ✓ Required |

Five Questions Every Board Should Now Be Asking

Boards are accountable for enterprise risk — including AI risk. The following questions, asked of management in a board risk committee context, will immediately surface whether your institution has a governance gap between its information security posture and its AI governance posture.

1. "Do we have a complete inventory of every AI system operating in production — including vendor-embedded and business-unit-deployed AI?"
A "yes" answer should be accompanied by a specific count, a risk classification, and an accountable owner for each system. A vague answer indicates a Shadow AI exposure.

2. "Which of our AI systems have been validated, and when was the most recent validation performed?"
Validation is distinct from testing. It requires documented evidence that a model performs as intended, across the population it serves, within acceptable error bounds.

3. "Have our AI systems that touch credit, fraud, or compliance decisions been tested for disparate impact?"
This is both a fair lending requirement and an ISO 42001 requirement. The absence of documented disparate impact testing in these domains represents compounding regulatory exposure.

4. "Can we explain, in plain language, how our highest-risk AI systems make their decisions — and would that explanation satisfy an examiner?"
Explainability is not just a consumer protection requirement. It is the foundational evidence that demonstrates your governance program has substance beyond documentation.

5. "Is our CISO's mandate scoped to include AI governance — or only information security?"
In most institutions, the CISO's charter was written before ISO 42001 existed. If AI governance has not been explicitly assigned — to the CISO, CAIO, CRO, or a dedicated function — it likely belongs to no one.

The Case for an Integrated Posture

The strongest AI governance posture is not ISO 27001 or ISO 42001 — it is both, implemented as an integrated management system.

In practice, a significant portion of ISO 42001's requirements depend on a functioning ISO 27001 foundation. You cannot govern AI data without data security controls. You cannot document model training data provenance without access logs and data lineage controls that a mature 27001 program provides. The standards are not redundant — they are complementary, covering adjacent surfaces of the same governance obligation.

The integrated approach also produces compounding credibility in examination. An institution that presents both certifications — or documented alignment to both standards — is making a statement about the depth of its governance investment that a single certification cannot convey. For institutions under multiple regulatory jurisdictions, dual certification is increasingly becoming the expected standard rather than a differentiator.

  • Fewer than 10: dual ISO 42001 & 27001 Lead Auditors globally
  • 2023: the year ISO 42001 was published
  • 420+: AI systems governed under both standards

What Boards and Leadership Teams Should Do Now

The practical path forward begins with a governance gap assessment — a structured review that maps your current AI governance controls against ISO 42001's requirements and identifies where your existing ISO 27001 program does and does not provide coverage.

This assessment typically produces three outputs: a gap inventory ranked by regulatory risk, a prioritized remediation roadmap, and a board-level summary that translates technical governance gaps into risk language appropriate for committee reporting.

For most institutions, the highest-priority gaps cluster in three areas: AI system inventory completeness, validation documentation for high-risk models, and explainability posture for consumer-facing AI decisions. These are also the areas where examiners are most actively looking.

The window to address these gaps on your own timeline — before an examination surfaces them — is narrower than it was two years ago. ISO 42001 is no longer a forward-looking standard. It is the framework against which regulators are increasingly benchmarking institutional AI governance today.

"The board question is not 'should we get ISO 42001 certified?' The board question is 'are we governing our AI the way ISO 42001 requires — and can we demonstrate that to an examiner?' Certification is the evidence. Governance is the obligation."

— Rehan Kausar

Is Your AI Governance Program ISO 42001 Ready?

A structured gap assessment maps your current controls against ISO 42001 requirements, identifies where your existing 27001 program provides coverage — and where it doesn't. Delivered in 2–4 weeks with a board-ready summary.
