Building an AI Governance Function Inside an Omani Ministry

An Omani undersecretary has six concurrent AI initiatives in flight: a vendor pilot for citizen chat, an internal RAG pipeline over circulars, a fraud model on benefits payments, two procurement-driven analytics tools, and a translation layer that turns English correspondence into Arabic. None of them has a named owner inside the ministry. None has a documented decommission trigger. This article describes the smallest function that can fix that, and how to stand it up in 90 days.

1. Why a small ministry needs a 3-person unit, not a committee

Most Omani ministries respond to AI risk by forming a committee. A committee meets monthly, rates risk on a 1 to 5 scale, and adjourns. By the time a translation model starts drifting on legal terminology, the committee has not met in seven weeks. The harm is already in citizen-facing letters.

The alternative is an operating unit. Three named civil servants, each with a single job, sitting under the undersecretary or the digital transformation director. They own a register, a gate, and a dashboard. They can pause a system before lunch and document the reason after.

This shape comes directly from the GOVERN function of the NIST AI Risk Management Framework, which makes governance a continuous activity across all four functions (Govern, Map, Measure, Manage), not a quarterly review event. It also matches the management-system logic of ISO/IEC 42001, the first international AI management system standard, which expects defined roles and a Plan-Do-Check-Act cycle owned by people, not by a calendar invite.

Three people is not a budget statement; it is a span of control. With four or more, accountability dilutes. With two, holiday cover collapses. Three holds.

2. The three roles: data steward, model owner, ethics reviewer

The unit has one mission (every AI system used by the ministry is registered, monitored, and reversible) and three lenses on that mission:

  • Data steward. Owns the lineage of every dataset that touches a model: source, classification, retention, residency. Co-signs every procurement that involves citizen data. Reports to the Data Protection Officer for PDPL alignment.
  • Model owner. Owns the inventory of models in use (vendor SaaS, on-prem fine-tunes, RAG pipelines). Tracks performance, drift, and incidents. Holds the kill switch. Reports to the CIO or digital transformation director.
  • Ethics reviewer. Owns the impact assessment for new use cases, the appeals route for affected citizens, and the public-explainability artefacts (model cards, decision notices). Reports dotted-line to the Internal Audit function for independence.
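
To make the three lenses concrete, here is a minimal sketch of what one register entry could look like, with a sign-off field per role. The RegisterEntry class and its field names are illustrative assumptions, not a prescribed ministry schema:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

# Illustrative sketch only: class and field names are assumptions,
# not a prescribed ministry schema.
@dataclass
class RegisterEntry:
    system_name: str
    business_owner: str          # a named civil servant, never a team
    hosting: str                 # "vendor-saas", "on-prem", "hybrid"
    data_classes: list[str] = field(default_factory=list)  # e.g. ["citizen-pii"]

    # Data steward lens: lineage and residency
    data_steward_signoff: Optional[date] = None
    data_residency: str = "unknown"

    # Model owner lens: baseline and drift threshold fixed at deployment
    baseline_metric: Optional[float] = None
    drift_threshold: Optional[float] = None

    # Ethics reviewer lens: impact assessment and appeal route
    impact_assessment_signed: Optional[date] = None
    appeal_route: Optional[str] = None

    # Decommission trigger, written on day one
    end_of_life_trigger: str = ""

    def deployable(self) -> bool:
        """All three lenses must have signed, and a decommission
        trigger must exist, before go-live."""
        return (
            self.data_steward_signoff is not None
            and self.baseline_metric is not None
            and self.drift_threshold is not None
            and self.impact_assessment_signed is not None
            and bool(self.end_of_life_trigger)
        )
```

The point of deployable() is that no single role can wave a system through: the check fails unless all three lenses have signed and an end-of-life trigger is written.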

None of these are full-time AI specialists on day one. They are existing senior staff (an enterprise data architect, a head of digital services, a legal or audit officer) given a written mandate, a one-page charter, and four hours per week ring-fenced. Specialist depth comes from external advisors and from the on-prem AI vendor; see on-premise AI for sovereign institutions for how vendor responsibilities flow into this map.

3. Lifecycle controls: procurement, deployment, monitoring, decommission

The unit applies four gates, mapped to the OECD AI Principles and to ISO 42001 clauses:

  1. Procurement gate. No AI capability enters the ministry without a one-page intake: business owner, data classes touched, residency, exit clause, model provenance. Vendor SaaS without a documented exit clause is rejected (a minimal intake check is sketched after this list). This is where the unit catches the most damage early.
  2. Deployment gate. Before go-live, the ethics reviewer signs an impact assessment (purpose, affected populations, error modes, appeal route). The model owner registers the system in the inventory with a baseline metric and a drift threshold. The data steward signs off on training and inference data flows.
  3. Monitoring gate. Quarterly review per system: drift, incident log, complaints received, costs vs. forecast. Anything outside threshold escalates within 48 hours, not at the next quarterly meeting; a drift check is sketched below.
  4. Decommission gate. Every system has a written end-of-life trigger from day one (e.g. accuracy below X, vendor change of control, regulatory finding, loss of business owner). When triggered, models are archived, data is purged or returned, and citizens are notified where the system touched them.
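
Gate 1 is simple enough to encode. A minimal sketch of the intake check, assuming the intake form arrives as a plain dict; the required field names mirror the one-page intake above but are otherwise illustrative:

```python
# Procurement-gate sketch. Field names are illustrative assumptions,
# not a prescribed intake form.
REQUIRED_FIELDS = (
    "business_owner",
    "data_classes",
    "residency",
    "exit_clause",
    "model_provenance",
)

def procurement_gate(intake: dict) -> list[str]:
    """Return rejection reasons; an empty list means the intake passes."""
    return [f"missing field: {f}" for f in REQUIRED_FIELDS if not intake.get(f)]

# A vendor-SaaS intake with no documented exit clause fails here,
# before any contract is signed:
reasons = procurement_gate({
    "business_owner": "Director of Benefits",
    "data_classes": ["citizen-pii"],
    "residency": "in-country",
    "model_provenance": "vendor fine-tune, base model disclosed",
})
assert reasons == ["missing field: exit_clause"]
```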

This is the same lifecycle posture the EU is now codifying for high-risk public-sector AI under Article 26 of the EU AI Act (deployer obligations: human oversight, logging, post-market monitoring), with the August 2026 high-risk deadline providing useful external pressure even for non-EU ministries that work with European counterparts.
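
Gate 3, in the same spirit, reduces to a check the model owner can run on every register entry at each metrics refresh, not only at the quarterly review. A minimal sketch, reusing the illustrative RegisterEntry fields from section 2; the 48-hour window comes from the gate itself, the rest is assumed:

```python
from datetime import datetime, timedelta
from typing import Optional

# Monitoring-gate sketch over the illustrative RegisterEntry from
# section 2. Only the 48-hour window comes from the gate; everything
# else is an assumption.
ESCALATION_WINDOW = timedelta(hours=48)

def check_drift(entry, current_metric: float) -> Optional[str]:
    """Return an escalation notice if the live metric has crossed the
    drift threshold registered at deployment; None means in-band.
    Expects an object shaped like the RegisterEntry sketch."""
    if entry.drift_threshold is None:
        return f"{entry.system_name}: no drift threshold registered"
    if current_metric < entry.drift_threshold:
        deadline = datetime.now() + ESCALATION_WINDOW
        return (
            f"{entry.system_name}: metric {current_metric:.3f} below "
            f"threshold {entry.drift_threshold:.3f}; escalate before "
            f"{deadline:%Y-%m-%d %H:%M}"
        )
    return None
```

Gate 4 stays human: the end-of-life trigger is a written sentence the unit reads and judges, not a condition a script parses.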

4. The practical first 90 days

Days 1 to 30, charter. Issue a one-page mandate signed by the undersecretary. Name the three people. Block four hours per week per person. Publish an internal memo describing the unit, the gates, and how staff submit a new use case.

Days 31 to 60, inventory. The model owner walks every directorate and lists every AI system in use (including unsanctioned ChatGPT habits and vendor-bundled features). Each entry gets: owner, data classes, hosting, contract, accuracy claim, current status (sanctioned, tolerated, to-decommission). The unit will find systems nobody knew about; this is normal.
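
The day-60 triage over that inventory fits on one screen. A sketch, assuming each entry is a dict with the fields just listed; the status values come from the inventory above, the scoring weights are assumptions:

```python
# Day-60 triage sketch. Entries are assumed to be dicts with the
# fields listed above; the scoring weights are illustrative.
def triage(inventory: list[dict]) -> list[dict]:
    """Order the inventory so the riskiest entries surface first."""
    def risk(entry: dict) -> int:
        score = 0
        if not entry.get("owner"):
            score += 3  # no named owner: nobody can answer for it
        if entry.get("status") == "tolerated":
            score += 2  # in use, never approved
        if "citizen-pii" in entry.get("data_classes", []):
            score += 2  # PDPL exposure
        if not entry.get("contract"):
            score += 1  # no exit clause to fall back on
        return score
    return sorted(inventory, key=risk, reverse=True)
```

The first impact assessments in days 61 to 90 come straight off the top of this list, which is what "pick the highest-risk systems" means in practice.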

Days 61 to 90, gates live. Procurement, deployment, monitoring, decommission gates start running on the inventory. The first three impact assessments are completed (pick the highest-risk systems). One system is decommissioned to set precedent. Mu'een, Oman's national shared-AI platform, is added to the register as a sanctioned external dependency where the ministry uses it.

By day 90 the ministry has: a register, a gate, a dashboard, three documented impact assessments, one decommission, and a quarterly cadence. The function is small and unglamorous. It is also the difference between AI that serves the public and AI that surprises the minister at a press conference.

Briefing. If you are scoping the first 90 days for your ministry, write to [email protected] for a one-hour briefing. We come with the charter, the inventory template, and a ready-made set of impact-assessment forms in Arabic and English. Pricing by quotation.

Frequently asked

Why three people, not a steering committee?

Steering committees rate proposals quarterly and never own decisions. A three-person unit (data steward, model owner, ethics reviewer) owns the system register, the approval gate, and the monitoring dashboard. Committees can advise; only named owners can decommission a model on a Tuesday morning when drift triggers fire.

Do we need ISO 42001 certification on day one?

No. Use ISO 42001 as the structural map (policy, risk register, lifecycle controls, internal audit) and aim for first surveillance audit at month 18. The first 90 days should produce evidence artefacts that a future certifier can sample, not a polished certificate.

How does this interact with NCSI and the PDPL?

The AI governance unit sits under the existing CISO and DPO functions, not parallel to them. NCSI cybersecurity controls and PDPL data-subject rights remain mandatory; the AI unit adds model-specific controls (training-data lineage, drift, prompt-injection threat modelling) on top. One reporting line, three lenses.

What if the ministry only buys SaaS AI, never trains a model?

The model owner's role shifts to deployer obligations in the sense of EU AI Act Article 26: human oversight, logging, impact assessment, decommission rights. Procurement becomes the primary control point. The unit shrinks toward governance and audit, but it does not disappear; vendor-AI is still ministry-AI to the citizen.