Building an AI Information Security Management System Aligned with NCSI's Second Edition
Compliance teams in Omani ministries, regulators, and state-owned enterprises now have to answer a question that did not exist two years ago: how does our Information Security Management System (ISMS) actually accommodate an AI workload? The honest answer is that a stock ISO 27001 ISMS misses several AI-specific surfaces, and that NCSI's second-edition guidance has tightened expectations. This article maps the practical alignment between ISO 42001, ISO 27001:2022, and the NCSI second-edition controls, then hands over an evidence template and a 90-day plan a sovereign-grade institution can run. It pairs with our pillar on on-premise AI for sovereign institutions in Oman and the GCC, which sits one architectural layer below the ISMS work.
ISO 42001, ISO 27001, and NCSI second edition in one diagram
Three frameworks intersect cleanly when you read them in the right order. ISO/IEC 42001:2023 is the AI management system standard, designed as a lifecycle-based extension that sits on top of an existing security ISMS. ISO/IEC 27001:2022 is the underlying ISMS, with Annex A controls clustered into organisational, people, physical, and technological themes. NCSI's second-edition guidance is the Omani national overlay: it imports the 27001 spine, adds the National Data Classification Policy as a custody overlay, and references ISO 42001 plus the NIST AI Risk Management Framework as recognised AI reference frameworks. The mental model is simple: 27001 is the chassis, 42001 is the AI-specific powertrain, NCSI is the local plate. None of the three replaces the others.
The twelve control families that map cleanly
Twelve control families do most of the alignment work. Drawn from ISO 27001 Annex A and ISO 42001 controls, mapped against the NCSI second-edition pillars, they cover roughly 80 percent of what a sovereign auditor will check.
- Information security policies and AI policy. Top-level policies covering acceptable AI use, model approval, and human oversight (27001 A.5.1, 42001 5.2).
- Roles, responsibilities, and AI accountability. A named AI risk owner, model owner, and data steward (27001 A.5.3, 42001 5.3).
- Risk assessment and AI impact assessment. An AI-specific impact assessment per use case, in addition to the general risk register (42001 6.1, NCSI sectoral overlay).
- Asset and model inventory. Every model, every dataset, every embedding store treated as an asset of record (27001 A.5.9, 42001 7.4).
- Data classification overlay. National Data Classification tiers applied to training data, prompts, and outputs.
- Access control and prompt-level authorisation. Identity-bound access to the model and to retrieval indexes (27001 A.5.15, 27001 A.8.3).
- Cryptography and key custody. Encryption of weights at rest, key custody on institutional hardware (27001 A.8.24).
- Operations security and inference logging. Tamper-evident logs of every prompt and response routed through controlled channels (27001 A.8.15).
- Supply chain and model provenance. Verified offline source for weights and training data lineage (27001 A.5.19, 42001 8.3).
- Secure development for AI. Versioned model pipelines, signed releases, test gates (27001 A.8.25, 42001 8.4).
- Incident response and OCERT liaison. AI-specific incident playbooks, named OCERT contact, response time commitments (27001 A.5.24).
- Continuity, drift, and bias review. Quarterly drift, bias, and red-team review with documented remediation (42001 9.1, NCSI sectoral overlay).
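Held as data rather than prose, the twelve families become a living input to the gap analysis in the 90-day plan. A minimal Python sketch, assuming a simple per-family status field; the status values shown are hypothetical and the list is truncated for brevity:

```python
# Illustrative gap-analysis structure. Family names and clause references
# follow the list above; the "status" values are hypothetical examples.
CONTROL_FAMILIES = [
    {"family": "AI policy",                 "refs": ["27001 A.5.1", "42001 5.2"], "status": "implemented"},
    {"family": "AI accountability",         "refs": ["27001 A.5.3", "42001 5.3"], "status": "implemented"},
    {"family": "AI impact assessment",      "refs": ["42001 6.1"],                "status": "gap"},
    {"family": "Asset and model inventory", "refs": ["27001 A.5.9", "42001 7.4"], "status": "partial"},
    {"family": "Inference logging",         "refs": ["27001 A.8.15"],             "status": "gap"},
    # ... remaining seven families omitted for brevity
]

def open_gaps(families):
    """Return the families that still need work, for the prioritised gap list."""
    return [f["family"] for f in families if f["status"] != "implemented"]

print(open_gaps(CONTROL_FAMILIES))
```

Keeping the register in version control turns each quarterly review into a diff rather than a rewrite.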
AI-specific extensions: lineage, prompt injection, jailbreak, training-data provenance
Four extensions sit outside what a stock 27001 ISMS knows about, and they are where most institutional gaps live today.
- Model lineage. A signed manifest tracing every deployed model back to its base weights, fine-tuning datasets (with classification), training compute, and the engineer who promoted the release. Treat the manifest as an audit artefact retained for the same period as the data classification tier requires.
- Prompt injection resilience. Documented test cases covering direct, indirect, and tool-use injection. Reference taxonomy: the OWASP Top 10 for LLM Applications. Quarterly red-team report attaches to the management review.
- Jailbreak resilience. Standardised refusal taxonomy and an evaluation harness that tests for known jailbreak patterns plus institution-specific red lines (classified document references, protected categories, anti-state prompts). Failure rate is a tracked KPI.
- Training data provenance. For any fine-tuned or RAG-indexed corpus, a documented source list with copyright, classification, and consent basis under PDPL where applicable. Public-only training is the cleanest path. Mixed-source training requires a documented legal opinion on file.
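The lineage manifest in particular lends itself to automation. A minimal sketch of a manifest builder and verifier, assuming HMAC signing for illustration; the field names are hypothetical, and in production the signing key belongs on institutional hardware, not in application code:

```python
import hashlib
import hmac
import json

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 so large weight files never load into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(weights_path, datasets, classification, promoted_by, key: bytes):
    """Assemble a lineage manifest and attach an HMAC signature (illustrative scheme)."""
    manifest = {
        "base_weights_sha256": sha256_of(weights_path),
        "fine_tuning_datasets": datasets,       # each entry carries its own classification
        "data_classification": classification,  # drives the retention period
        "promoted_by": promoted_by,             # engineer who promoted the release
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(manifest: dict, key: bytes) -> bool:
    """Recompute the signature over everything except the signature field."""
    body = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])
```

Any edit to the manifest after promotion fails verification, which is exactly the property an audit artefact needs.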
Audit evidence templates
Six artefacts cover the evidence surface a sovereign auditor will request. Drafting them once, then maintaining them as living documents, takes the audit out of the critical path.
- AI Asset Register. CSV or markdown table with model name, base weights, classification of training data, deployment environment, owner, last review date.
- Model Bill of Materials. Signed JSON or YAML with weight hashes, framework versions, container digests, GPU driver versions, update channel.
- AI Impact Assessment. Per-use-case template with risk taxonomy, mitigation, residual risk, sign-off by AI risk owner and CISO.
- Inference Log Specimen. Sample audit log line per request, retention policy keyed to data classification tier.
- Red-Team Report. Quarterly findings on prompt injection, jailbreak, bias, with remediation tracker.
- Incident Runbook. AI-specific incident scenarios (model output leak, prompt log exposure, weight tampering) with named OCERT liaison and response timing.
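As an illustration of the Inference Log Specimen, a sketch of one structured record per request, keyed to classification tier. The field names and retention periods are hypothetical examples, not the National Data Classification Policy's actual figures, and hashing the prompt and response is one design choice; an institution may instead log full text in an access-controlled store:

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical tier-to-retention mapping; substitute the periods the
# institution's classification policy actually mandates for each tier.
RETENTION_DAYS = {"Public": 365, "Restricted": 1825, "Confidential": 3650}

def log_line(user_id, model, prompt, response, tier):
    """One log record per request; prompt and response are stored as
    hashes here so the specimen itself carries no classified text."""
    return json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "model": model,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
        "classification": tier,
        "retention_days": RETENTION_DAYS[tier],
    }, sort_keys=True)
```

Embedding the retention period in the record itself means the purge job needs no side lookup against the classification policy.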
The first 90 days: action plan
A practical sequence for an institution starting from a 27001-certified baseline that has not yet formally extended to AI.
- Days 1 to 14, scope and gap. Extend the ISMS scope statement to include the AI system. Conduct an ISO 42001 gap analysis using the twelve control families above. Output: prioritised gap list.
- Days 15 to 45, evidence drafting. Draft the six audit artefacts. Populate the AI Asset Register and the Model Bill of Materials for every model in production. Run the first AI impact assessment on the highest-classification use case.
- Days 46 to 75, controls hardening. Lock down inference logging, key custody, and verified offline update channels. Schedule the first red-team round and document refusal taxonomy. Brief OCERT liaison.
- Days 76 to 90, management review. First formal AI ISMS management review with the CISO, AI risk owner, and data protection officer. Approve the corrective actions log. Submit the updated ISMS scope and AI policy to internal audit. The institution is now ready for an external NCSI-aligned audit at the next cycle.
Mu'een, Oman's national shared AI platform, is in scope for the same controls when an institution consumes it for Public or lower-Restricted workloads. The classification overlay decides which platform serves which tier, and the AI ISMS spans both.
Hosn was built so that this evidence pack reads like an inventory of standard deliverables. Email [email protected] for a one-hour briefing in which we walk through the twelve control families against your specific institutional context, and leave you with a draft AI ISMS scope statement plus a populated 90-day action plan you can hand to your CISO.
Frequently asked questions
Do we need ISO 42001 if we already hold ISO 27001 certification?
ISO 27001 covers information security generally, but it does not address AI-specific risks like model drift, prompt injection, or training data lineage. ISO 42001 is the management-system standard for those concerns and is designed to bolt on top of an existing ISO 27001 ISMS rather than replace it. For sovereign Omani buyers, the practical answer is to extend the 27001 scope statement to include the AI system, then add the 42001 AI-specific controls as a delta. NCSI's second-edition baseline assumes an organisation already operates a 27001-style ISMS.
What is new in NCSI's second-edition guidance compared with the first edition?
The second edition tightens supply chain disclosure expectations, formalises the data classification overlay for processing systems (which now includes AI inference and embedding stores), strengthens incident reporting timelines aligned with OCERT, and references international AI governance standards including ISO 42001 and the NIST AI Risk Management Framework. The control families are unchanged in count but several have new sub-clauses for systems that learn or generate.
How does the NIST AI Risk Management Framework fit into this?
NIST AI RMF is voluntary US guidance, not Omani law, but it is the most mature open framework for AI-specific risk taxonomy. Mapping ISO 42001 controls onto NIST AI RMF functions (Govern, Map, Measure, Manage) makes the AI ISMS legible to international auditors and to multinational partners. NCSI's second edition references it as a recognised reference framework, so citing the mapping in a tender response strengthens the compliance posture without adding obligations.
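One way to make that mapping concrete is a simple crosswalk from the four RMF functions to the 42001 clauses cited in this article. The assignments below are an illustrative alignment for a tender response, not an authoritative mapping:

```python
# Illustrative crosswalk from NIST AI RMF functions to the ISO 42001
# clauses cited in this article; the assignments are example alignments.
RMF_CROSSWALK = {
    "Govern":  ["42001 5.2 AI policy", "42001 5.3 roles and accountability"],
    "Map":     ["42001 6.1 AI impact assessment", "42001 7.4 model inventory"],
    "Measure": ["42001 9.1 drift, bias and red-team review"],
    "Manage":  ["42001 8.3 model provenance", "42001 8.4 secure AI development"],
}

def clauses_for(function: str) -> list:
    """Clauses a tender response can cite under a given RMF function."""
    return RMF_CROSSWALK.get(function, [])
```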
What audit evidence does an AI ISMS actually need to keep?
At minimum: a signed model bill of materials for every deployed model, a training data provenance statement (for fine-tuned models), an inference audit log retained per the data classification tier, a red-team test report covering prompt injection and jailbreak resilience, a quarterly drift and bias review, and a documented incident response runbook with named OCERT liaison. Each artefact should reference the specific ISO 27001 Annex A control and ISO 42001 clause it satisfies, so the auditor's mapping work is already done.