How Central Banks Can Use AI for Supervision Without Sharing Regulated Data

A central bank supervisor sits down on a Tuesday morning to a stack of thirty-plus prudential returns from the country's licensed banks, each containing the granular detail that defines the institution: obligor concentrations, liquidity buffers by counterparty, off-balance-sheet exposures, sectoral lending mix, and the running tally of operational-risk events. The supervisor is expected to spot the bank whose ratios drifted, draft the letter, and queue the on-site visit by Thursday. AI can compress that workload by an order of magnitude, but only if the regulated data never leaves the central bank's perimeter. The moment those returns flow into a public AI tool, the regulator has handed competitor banks' inside information to a third country's commercial provider, and the supervisory file is no longer privileged.

This piece walks through where AI genuinely helps a central bank supervisor, why public AI tools are out of bounds, and what an on-premise architecture looks like for a Central Bank of Oman or SAMA-class regulator. The fuller treatment of the underlying legal framing sits in our pillar piece on AI sovereignty under the Omani PDPL; this article focuses specifically on the supervisor's seat.

The supervisor's data problem

A modern banking supervisor operates on a continuous stream of granular returns. The capital, liquidity, large-exposure, foreign-exchange, IFRS 9 staging, and operational-risk reports arrive monthly or quarterly from every licensed bank in the country. A small jurisdiction has thirty regulated banks, a regional one has more than a hundred, and the data volume per institution grows every reporting cycle. The supervisor's job is to read across all of them simultaneously, spot deviations from peer behaviour, and connect the dots to qualitative supervisory intelligence (board minutes, audit reports, examiner findings, complaints traffic).

The data the supervisor handles has three properties that public AI services cannot honour. First, every line item is competitively sensitive: a single bank's sector concentration, top-twenty-obligor list, or non-performing-loan migration is enough to move that bank's funding cost or trigger an information run. Second, the data is cross-bank: any model that ingests it sees a comparative position across the entire market that no individual bank's own AI tool ever should. Third, the supervisor's own working papers, draft letters, and internal disagreements are institutional-privilege material that a foreign-jurisdiction provider could be compelled to disclose. The combination is exactly the profile that exposure to the US CLOUD Act and China's Data Security Law rules out of public-cloud processing.

Why public AI tools are off-limits

Three failure modes make public AI tools structurally incompatible with prudential supervision.

  • Cross-bank information leakage. A central bank that pastes Bank A's exposures into a public LLM has, in effect, briefed every future user of that provider's training surface on Bank A's positions. Even where a vendor offers a no-training tier, the trust assumption is that no engineer at the provider, no incident-response team, and no foreign authority with subpoena power ever sees the prompt content. That assumption is not survivable for prudential data.
  • Market-sensitive prompt content. Supervisory letters frequently quote unannounced rating actions, capital-raise expectations, and resolution planning. A leak of any of these prompts is itself a market event. Public AI services log, cache, and replicate prompts across availability zones by design. The supervisor cannot certify to the Governor that the prompt is unrecoverable.
  • Anonymisation does not survive prudential data. The combination of asset size, geography, sector mix, and reporting period uniquely identifies almost any regulated bank in almost any market. Strip the name and the model still knows which bank it is reading. The privacy literature has documented this re-identification risk for two decades; prudential data is the worst case for it.
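The re-identification point can be made concrete with a toy sketch. The filings below are entirely synthetic and the quasi-identifier fields (asset band, region, top sector, period) are illustrative, but the mechanism is the real one: once each anonymised return is unique on those fields alone, stripping the bank's name protects nothing.

```python
# Illustrative sketch with synthetic data: each tuple is an "anonymised"
# filing reduced to its quasi-identifiers. Field values are hypothetical.
from collections import Counter

filings = [
    ("10-15bn", "Muscat",  "real_estate", "2025Q3"),
    ("2-5bn",   "Muscat",  "retail",      "2025Q3"),
    ("10-15bn", "Salalah", "trade",       "2025Q3"),
    ("2-5bn",   "Muscat",  "trade",       "2025Q3"),
]

# Count how many filings share each quasi-identifier combination.
counts = Counter(filings)
unique_share = sum(1 for c in counts.values() if c == 1) / len(filings)
print(f"{unique_share:.0%} of anonymised filings are unique on quasi-identifiers alone")
```

In a real market the effect is stronger, not weaker: asset size alone is close to a fingerprint in a thirty-bank jurisdiction.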

The Financial Stability Board's 2024 stocktake on AI in financial services names third-party concentration, model risk, and data governance among the principal AI vulnerabilities authorities must address, and explicitly calls on national authorities to enhance their supervisory capabilities including by leveraging AI-powered tools. Building those tools on a public AI service would create the very third-party-concentration vulnerability the FSB tells supervisors to manage.

On-prem AI patterns for supervision

Three patterns fit the supervisor's seat and run cleanly on on-premise hardware.

Anomaly detection on regulatory returns. An engine ingests every prudential return as it arrives, computes peer-group statistics across the licensed-bank universe, and scores each filing on dozens of signals: capital-ratio drift, large-exposure concentration changes, sectoral-shift surprises, IFRS 9 stage-migration outliers, and inter-form inconsistencies (the form that says one number and the form that says another). The supervisor opens a ranked queue, not a stack of PDFs. The same approach applies to the supervisor's own audit copilot pattern described in our state-audit anomaly piece; the techniques transfer directly.
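The scoring step above can be sketched in a few lines. This is a minimal illustration, not the engine itself: the input is a hypothetical dict of quarter-on-quarter CET1-ratio moves in basis points, and the median/MAD score stands in for the dozens of production signals.

```python
# Peer-group outlier scoring sketch (synthetic figures, hypothetical names).
# Median and MAD are used instead of mean/stdev so the outlier being hunted
# does not distort the baseline it is scored against.
import statistics

cet1_moves_bp = {"BankA": -80, "BankB": -12, "BankC": -18, "BankD": -10, "BankE": -15}

peer_median = statistics.median(cet1_moves_bp.values())
mad = statistics.median(abs(v - peer_median) for v in cet1_moves_bp.values())

def robust_score(value_bp: float) -> float:
    """Deviation from the peer median, in units of median absolute deviation."""
    return (value_bp - peer_median) / mad if mad else 0.0

# The supervisor's ranked queue: largest absolute deviation first.
queue = sorted(cet1_moves_bp, key=lambda b: abs(robust_score(cet1_moves_bp[b])), reverse=True)
print(queue[0], round(robust_score(cet1_moves_bp[queue[0]]), 1))
```

The design choice worth noting is the robust statistic: with a mean/stdev score, a single large outlier inflates the denominator and partially hides itself.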

Narrative drafting from peer comparison. A retrieval-augmented LLM grounded in the supervisor's own manual, prior letters, and the peer-group statistics produces a first draft of the supervisory letter: "Bank X's CET1 ratio fell 80 basis points against a peer-group median move of negative 15, driven primarily by a 23 per cent increase in risk-weighted assets in commercial real estate; the supervisor expects the bank to confirm whether...". The supervisor edits, signs, and sends. The model never authors a final letter. It compresses the drafting time so the supervisor can spend the saved hours on the on-site visit.
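The grounding discipline behind that draft can be sketched as a prompt-assembly step. The template and field names below are hypothetical, and the actual generation call to the local model is deliberately omitted; the point is that every figure the model may use is injected from the peer-group statistics, never recalled from the model's weights.

```python
# Sketch of the grounding step for letter drafting. The figures are carried
# into the prompt explicitly; a local inference call (not shown) would
# consume this string. Template wording and parameters are illustrative.
PROMPT_TEMPLATE = """You are drafting a supervisory letter. Use ONLY the figures below.

Bank: {bank}
CET1 move (bp): {move}
Peer-group median move (bp): {peer_median}
Driver from return analysis: {driver}

Draft the opening paragraph in the supervisor's house style. Mark any
figure you cannot trace to the inputs above as [UNVERIFIED]."""

def build_prompt(bank: str, move: int, peer_median: int, driver: str) -> str:
    """Assemble a fully grounded drafting prompt from retrieved statistics."""
    return PROMPT_TEMPLATE.format(bank=bank, move=move,
                                  peer_median=peer_median, driver=driver)

prompt = build_prompt("Bank X", -80, -15, "RWA growth in commercial real estate")
print(len(prompt) > 0)
```

The human-in-the-loop boundary stays where the article places it: the model produces a first pass from these inputs; the supervisor edits, signs, and sends.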

Complaint triage and consumer-protection signal extraction. Most central banks operate a consumer-complaints function whose volume now exceeds what a small policy team can read fully. An on-premise classifier reads incoming complaints in Arabic and English, clusters them by product, conduct theme, and supervised entity, and surfaces the patterns that warrant a thematic review. The complaint text never leaves the central bank.
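The routing shape of that triage layer can be shown with a deliberately toy classifier. A production system would use a fine-tuned local model handling Arabic and English, but the contract is the same: free text in, a conduct theme out, with an explicit "unclassified" bucket for the residue a human must read. Theme names and keywords below are invented for illustration.

```python
# Toy complaint-triage sketch: keyword scoring standing in for a local
# fine-tuned classifier. Themes and keyword lists are hypothetical.
THEMES = {
    "fees": ["fee", "charge", "commission"],
    "access": ["frozen", "blocked", "locked"],
    "mis_selling": ["promised", "misled", "hidden"],
}

def triage(text: str) -> str:
    """Return the best-matching conduct theme, or 'unclassified'."""
    t = text.lower()
    scores = {theme: sum(kw in t for kw in kws) for theme, kws in THEMES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unclassified"

print(triage("The bank charged a fee and then another charge to close the account"))
```

What matters architecturally is not the classifier's sophistication but where it runs: the complaint text is scored inside the perimeter and only the cluster-level patterns surface to the thematic-review team.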

The Bank for International Settlements positioned this direction explicitly with its 2025 Project AISE announcement, an Innovation Hub initiative to build an AI Supervisor Enhancer for prudential authorities, and with its broader 2025 paper on AI for policy purposes. Both make clear that AI for supervision is a 2026 expectation, not an experiment.

Architecture posture for a CBO or SAMA-class regulator

The defensible architecture for a sovereign banking supervisor has four properties.

  • Hardware on the regulator's own land. An institutional-tier appliance racked inside the central bank's data centre, not a colocation facility, not a cloud region. The full stack from the GPU up is owned by the central bank.
  • Open-weight models held as files. Gemma, Qwen, DeepSeek-R1 distilled variants, and Falcon Arabic for the Arabic supervisory corpus, all loaded from local storage. No call leaves the perimeter, no licence ties the regulator to a vendor's continued goodwill, and the model file itself is auditable.
  • Indexes built inside the perimeter. The prudential-returns warehouse, the supervisory manual, the prior-letters corpus, and the complaints text are indexed by tools the central bank operates. The retrieval layer is part of the regulator's own information architecture.
  • Operating staff accountable to the Governor. The model-risk file, the prompt-template library, the evaluation suite, and the audit logs are owned by named central-bank officers. Vendors support, but cannot operate, the system.
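The third property, indexes built inside the perimeter, reduces to a simple invariant: documents on local disk in, index on local disk out, no network call anywhere in between. The sketch below uses a bare inverted index over an invented two-document corpus to show the shape; a real deployment would use a locally operated embedding or BM25 index, but the perimeter property is identical.

```python
# Inside-the-perimeter indexing sketch: local corpus, local index, no
# network I/O. Document paths and contents are illustrative placeholders.
from collections import defaultdict

corpus = {
    "manual/large-exposures.txt": "large exposure limit is 25 per cent of tier 1 capital",
    "letters/2024/bank-x.txt": "the bank exceeded the large exposure limit in q2",
}

# Inverted index: token -> set of documents containing it.
index: dict[str, set[str]] = defaultdict(set)
for doc_id, text in corpus.items():
    for token in text.lower().split():
        index[token].add(doc_id)

def search(query: str) -> set[str]:
    """Return documents containing every query token (AND semantics)."""
    hits = [index[t] for t in query.lower().split() if t in index]
    return set.intersection(*hits) if hits else set()

print(sorted(search("large exposure limit")))
```

Because both the corpus and the index live on storage the central bank owns, the retrieval layer inherits the audit properties of the rest of the stack: every lookup can be logged to the same perimeter-internal audit trail.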

This posture is the same architectural pattern Hosn deploys for the broader sovereign-AI use case described in on-premise AI for sovereign institutions in Oman and the GCC, scoped to the central-bank workload. Pricing for sovereign-class deployments is by quotation, sized to concurrency, return volume, and Arabic-corpus depth.

If your central bank or supervisory authority is scoping a 2026 roll-out and wants a private walk-through of how the architecture maps to your specific reporting regime, email [email protected] for a one-hour briefing. Mu'een, Oman's national shared-AI platform, addresses a different layer of the public-sector AI stack; for prudential supervision, the requirement is a regulator-controlled on-premise system.

Frequently asked

Can a central bank use a public LLM service if it strips bank names from the prompts?

No. Anonymisation is brittle on prudential data. The combination of asset size, geography, sector mix, and reporting period uniquely identifies a regulated bank in almost every market. Once the model has the numbers, the supervised entity is reconstructible. The defensible posture is that regulated returns never leave the central bank's perimeter, which means on-premise inference.

Where does AI add real value in prudential supervision today?

Three places. First, anomaly detection on regulatory returns to flag outlier ratios, period-on-period jumps, and inconsistencies across forms before the supervisor reads the file. Second, narrative drafting from peer comparison, where the model writes the first pass of the supervisory letter grounded in peer-group statistics. Third, complaint triage and consumer-protection signal extraction across high-volume free-text channels.

How do BIS and FSB position AI in supervision?

The BIS Innovation Hub launched Project AISE in 2025 to build an AI assistant for financial supervisors capable of automating key tasks and enhancing on-site supervision. The FSB's 2024 stocktake on AI in financial services calls on national authorities to enhance supervisory capabilities including by leveraging AI-powered tools, while addressing third-party concentration, model risk, and data governance vulnerabilities.

What does a credible architecture look like for a CBO or SAMA-class regulator?

Hardware on land the central bank controls, open-weight models the central bank holds as files, indexes built from prudential returns and supervisory manuals inside the perimeter, and operating staff accountable to the Governor. Anomaly engines, retrieval-augmented copilots, and complaint triage all run on the same on-premise stack, with full audit logs that survive parliamentary or judicial scrutiny.