Framework · SDAIA AI Governance · KSA

SDAIA — the Kingdom's national AI governance architecture.

The Saudi Data & AI Authority (SDAIA) is the national reference for AI governance in the Kingdom. Its approach is not a single regulation but an entire architecture: AI Ethics Principles, Generative AI Guidelines, the AI Adoption Framework, and a layered set of sectoral instruments under SAMA, SFDA, DGA and CST. The instruments are predominantly non-binding, but for organisations selling into government or operating at scale in regulated sectors, alignment is fast becoming a procurement precondition. Saudi Arabia has declared 2026 the Year of AI; demand for structured AI governance has never been higher.

National authority · SDAIA (Saudi Data & AI Authority)
Core instruments · Three (Ethics · Generative AI · Adoption Framework)
Strategic anchor · Vision 2030 (NSDAI national strategy)
Latest milestone · 2026 declared the Year of AI by KSA
01 — The KSA AI governance architecture

Five layers. One national stack.

SDAIA's AI governance architecture is best understood as a layered stack rather than a single regulation. The PDPL forms the binding floor — substantive privacy law with full enforcement. SDAIA's ethics, generative AI and adoption instruments sit on top as non-binding guidance — increasingly treated as de facto procurement standards. ISO 42001 (which SDAIA itself has held since July 2024) provides the certifiable management-system layer. Each layer addresses a different risk and a different audience; together, they form the working governance baseline for AI in the Kingdom.

01
Foundation · Personal Data Protection Law (PDPL)

Royal Decree M/19. Substantive personal data law. Full enforcement since 14 September 2024. Underwrites every privacy obligation in any AI system processing personal data.

Binding
02
Ethical layer · AI Ethics Principles v2.0

First issued 2023, expanded 2025 to twelve principles. The values framework — non-binding but increasingly tested in procurement and government engagement contexts.

Non-binding
03
Operational guidance · Generative AI Guidelines

Two-track instrument issued January 2024 — one for government entities, one for the public. Content authenticity, watermarking, hallucination management, deepfake controls.

Non-binding
04
Adoption framework · AI Adoption Framework

Released at GAIN Summit, 12 September 2024. Four-level maturity model plus four enablers. The structural roadmap for scaling AI across an organisation, public or private.

Non-binding
05
Certifiable layer · ISO/IEC 42001:2023 — AI Management System

SDAIA itself certified in July 2024 — the first national AI authority globally to do so. International standard, structurally aligned with ISO 27001 and ISO 27701. Increasingly expected of suppliers to KSA government.

Certifiable

"Non-binding" is doing a lot of work in this stack. SDAIA's ethics, generative AI and adoption instruments are formally guidance — but they are increasingly treated as de facto procurement requirements for organisations bidding into government tenders, joining public-private AI initiatives, or operating at scale in sectors with active regulators (SAMA, SFDA, CST). The practical question for most organisations is not "must we comply" but "can we afford the procurement and reputational consequences of not aligning."

02 — AI Ethics Principles v2.0

Twelve principles. One values framework.

SDAIA first issued the AI Ethics Principles in 2023 with seven principles. The 2025 expansion, published on SDAIA's portal as v2.0, decomposed compound principles into more granular constituents and added Integrity as a standalone principle. The result is twelve principles covering the full ethical spectrum AI systems engage with, from technical reliability to human dignity to environmental responsibility. The principles apply to all AI stakeholders within KSA: public sector, private sector, non-profits, researchers, and civil society.

P-01

Integrity

AI systems should operate with honesty and ethical conduct. New as a standalone principle in v2.0.

P-02

Fairness

AI systems should be free from inappropriate bias; outcomes should not disadvantage protected groups.

P-03

Privacy

Personal data used by AI systems should respect individuals' privacy rights — anchored to PDPL obligations.

P-04

Security

AI systems should be secure against unauthorised access, manipulation, or compromise. Cybersecurity baseline.

P-05

Reliability

AI systems should perform consistently under expected conditions — predictable behaviour and outcomes.

P-06

Safety

AI systems should not cause physical, psychological, or social harm — proactive risk management embedded.

P-07

Transparency

AI system behaviour, capabilities, and limitations should be visible to those affected by their decisions.

P-08

Interpretability

AI decisions should be explicable in human-understandable terms — not just transparent but comprehensible.

P-09

Accountability

Organisations should be accountable for the AI systems they deploy — clear ownership, traceable decisions.

P-10

Responsibility

Designers, developers and operators bear responsibility for AI outcomes — accountability operationalised.

P-11

Humanity

AI should serve human dignity, autonomy, and wellbeing — humans in the loop where stakes are high.

P-12

Social & environmental benefit

AI deployments should produce net benefit to society and the environment — sustainability as a design constraint.

The Principles are formally non-binding but practically authoritative. SDAIA's powers include measuring compliance of in-scope entities and auditing AI ethics activities through the defined compliance mechanism, with support from sector regulators. Non-compliance does not directly trigger Principle-specific penalties but can trigger enforcement under PDPL, sectoral regulations, or contractual obligations — particularly in government procurement. The pragmatic posture: treat the Principles as binding for any deployment that matters.

03 — Generative AI Guidelines

The specific rules for generative systems.

SDAIA released the Generative AI Guidelines in January 2024 as a two-track instrument — one version for government entities, one for the general public. Where the Ethics Principles cover all AI systems at the values level, the GenAI Guidelines address the specific operational risks introduced by generative systems: hallucinations, deepfakes, content authenticity, watermarking, prompt injection, and the human-oversight requirements that follow. They are the most operationally specific of SDAIA's non-binding instruments and the most actively referenced in vendor risk frameworks across KSA.

SDAIA · January 2024

Generative AI risks the Guidelines actually name.

The four operational risk patterns the Guidelines address explicitly. Each represents a category of harm that pre-2022 AI guidance was silent on but that LLMs and diffusion models have made universal.


Content authenticity

Generated content should be identifiable as such. Watermarking, metadata tagging, and transparent disclosure to end users that AI-generated material is not human-produced.

Hallucination management

Generative systems produce plausible but factually incorrect outputs. Required: factual verification protocols, confidence-calibration, human review on high-stakes decisions.

Deepfake & synthetic media

Synthetic audio, video, and imagery can be weaponised against identity, reputation, and democratic processes. Detection, provenance tracking, response procedures required.

Human oversight on consequential decisions

Where generative AI outputs feed decisions affecting individuals — credit, hiring, healthcare, legal — human-in-the-loop review is mandated. Automation depends on stakes.
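The human-oversight pattern above can be sketched in code. This is a minimal illustration, not anything the Guidelines prescribe: the domain names echo the examples in the text, while the confidence floor and routing rule are assumptions for illustration.

```python
# Hypothetical sketch: routing generative-AI outputs to human review based on
# decision stakes. Domain list mirrors the examples above; the confidence
# threshold is an illustrative assumption, not a SDAIA-defined value.

HIGH_STAKES_DOMAINS = {"credit", "hiring", "healthcare", "legal"}

def requires_human_review(domain: str, confidence: float,
                          confidence_floor: float = 0.85) -> bool:
    """Return True if a generated output should be human-reviewed before use."""
    if domain.lower() in HIGH_STAKES_DOMAINS:
        return True                       # consequential decisions: always review
    return confidence < confidence_floor  # low-confidence outputs: review too

print(requires_human_review("hiring", 0.99))    # True
print(requires_human_review("marketing", 0.90)  # False
      if False else requires_human_review("marketing", 0.90))
```

The design choice worth noting: stakes dominate confidence. A high-confidence output in a consequential domain still goes to a human, which is the ordering the Guidelines' "automation depends on stakes" framing implies.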

Government entities are the primary audience. The "for Government" track is the more detailed and operationally specific of the two — and the one most relevant for organisations supplying government clients. The "for Public" track is consumer-facing guidance on responsible use. For private-sector organisations operating outside government supply chains, the for-Government version is still the more useful reference; it shows where SDAIA's expectations land for organisations under formal scrutiny.

04 — AI Adoption Framework

Four levels. Four enablers.

SDAIA released the AI Adoption Framework at the third Global AI Summit (GAIN) on 12 September 2024 — and it has rapidly become the default reference for organisations scaling AI in KSA. The framework structures adoption around two axes: a four-level maturity model showing where an organisation is today and where it could be, and four enablers showing what needs to be built to move between levels. Twenty-three government entities have established AI Offices within the framework; private-sector adoption continues to accelerate under the 2026 Year of AI directive.

The four maturity levels

Level 01

Emerging

Initial AI awareness. Limited or experimental AI initiatives. No formal governance. AI not yet integrated into strategic priorities.

Level 02

Developing

AI capabilities being built. Pilot projects in production. Initial governance practices forming. AI Office or equivalent structure beginning to emerge.

Level 03

Proficient

AI integrated across business functions. Mature governance, ethics review processes, established AI Office. Organisation-wide capability infrastructure in place.

Level 04

Advanced

AI is a strategic differentiator. Continuous improvement, advanced research, sectoral leadership. AI as a source of competitive advantage and societal contribution.

The four enablers

Enabler · 01

Data

Quality data as the foundation for AI models. Data governance, classification, accessibility, lineage, lawful basis for use. The PDPL alignment lives here.

Enabler · 02

Technology

Compute, infrastructure, model platforms, MLOps tooling, integration architecture. Sovereign cloud and on-prem options for sensitive workloads.

Enabler · 03

Human capabilities

Talent, training, organisational structure, AI Office staffing. Saudisation alignment, capacity building, role definition for AI-specific functions.

Enabler · 04

Responsible use

Ethics integration, governance committees, ongoing review processes, alignment with SDAIA Principles. The governance fabric that holds the rest accountable.
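The two axes above can be joined in a simple self-assessment record. This is an illustrative sketch only: the field names and the "weakest enabler caps overall maturity" rule are assumptions for illustration, not part of the published framework.

```python
# Illustrative self-assessment sketch for the Adoption Framework's four levels
# and four enablers. Field names and the aggregation rule are assumptions.

from dataclasses import dataclass
from enum import IntEnum

class MaturityLevel(IntEnum):
    EMERGING = 1
    DEVELOPING = 2
    PROFICIENT = 3
    ADVANCED = 4

@dataclass
class EnablerAssessment:
    data: MaturityLevel
    technology: MaturityLevel
    human_capabilities: MaturityLevel
    responsible_use: MaturityLevel

    def overall(self) -> MaturityLevel:
        # Conservative reading: overall maturity is capped by the weakest enabler.
        return min(self.data, self.technology,
                   self.human_capabilities, self.responsible_use)

org = EnablerAssessment(
    data=MaturityLevel.PROFICIENT,
    technology=MaturityLevel.DEVELOPING,
    human_capabilities=MaturityLevel.DEVELOPING,
    responsible_use=MaturityLevel.EMERGING,
)
print(org.overall().name)  # EMERGING
```

The min-based aggregation reflects the framework's logic that levels are moved between by building all four enablers: strong data and technology do not compensate for absent responsible-use governance.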

05 — Sectoral overlay

Four regulators. Four overlays.

SDAIA sets the national AI governance framework, but sectoral regulators add their own AI-specific requirements within their domains. The four most active in late 2025 / early 2026 — SAMA, SFDA, DGA and CST — each operate at the intersection of their existing regulatory mandate and SDAIA's national framework. For organisations operating in regulated industries, the sectoral overlay typically dominates day-to-day compliance work; SDAIA framework alignment is an additional layer rather than a substitute.

Sector · Finance · SAMA

Saudi Central Bank

Banking · Insurance · Fintech

AI risk assessments mandatory for AI deployments in regulated financial services. Model risk management, fairness in credit decisions, anti-bias requirements, robustness testing. Aligned with SDAIA Principles plus Basel-style model governance.

Sector · Health · SFDA

Saudi Food & Drug Authority

Healthcare AI · Medical devices

AI in medical devices subject to SFDA regulatory oversight. Software-as-a-medical-device (SaMD) classification, clinical evaluation, post-market surveillance. AI-specific updates to medical device frameworks during 2024-2025.

Sector · Government · DGA

Digital Government Authority

Public administration · GovTech

AI deployment in government context. Cross-references SDAIA Generative AI Guidelines for government track. Reference architecture for AI Offices across 23+ government entities. Procurement standard for vendors.

Sector · ICT · CST

Communications, Space & Technology Commission

Cloud · ICT · Telecom

Cloud computing, data centres, ICT services regulation — including environments where AI systems are hosted or deployed. Cloud Computing Regulatory Framework directly applicable to AI infrastructure.

A draft AI Hub Law went out for public consultation in April 2025. Despite the title, that draft did not propose a comprehensive AI regulatory framework — it introduced the concept of data embassies, allowing foreign entities to host data in KSA under their own governing legislation. The draft has not been enacted. A more comprehensive AI law remains under development; in the interim, the SDAIA framework plus sectoral overlays plus PDPL form the working governance baseline. Practitioners should monitor 2026 closely — the Year of AI directive may accelerate primary legislation.

06 — Vision 2030 & the Year of AI

Strategic context.

SDAIA's AI governance architecture cannot be separated from Saudi Arabia's broader strategic context. The National Strategy for Data & AI (NSDAI) makes AI central to Vision 2030's economic diversification agenda. The Kingdom has invested heavily in sovereign AI infrastructure — data centres, large language models with strong Arabic capability, and partnerships with global hyperscalers under sovereign cloud arrangements. Declaring 2026 the Year of AI is not symbolic; it is a directive to accelerate adoption across the entire public sector at scale.

Saudi Arabia · 2026 Year of AI

A national directive for AI at scale.

The Year of AI declaration formalises what was already accelerating: government-wide AI adoption, sovereign infrastructure investment, and the maturation of SDAIA's governance architecture from advisory to procurement-grade. For organisations supplying government, the practical effect is that AI governance posture is now a procurement filter.

$56bn/yr · projected productivity gains
23+ · government AI Offices established
53 · ICESCO member states approving the Riyadh Charter

SDAIA's international footprint is also expanding. The Riyadh Charter on Artificial Intelligence for the Islamic World — developed jointly by ICESCO, SDAIA and the Saudi National Commission — was unanimously approved by all 53 ICESCO Islamic member states in March 2025. ICAIRE, the International Center for AI Research and Ethics in Riyadh, was recognised by UNESCO as a Category 2 centre. Together, these signal that SDAIA's framework is being positioned not just as KSA national governance but as a reference model with reach across the Islamic world and through UNESCO channels.

07 — What the work looks like

Eleven workstreams SDAIA alignment actually requires.

SDAIA alignment work splits into three broad activity types: ethics-and-principles integration, generative-AI operational governance, and adoption-framework maturity build. Most engagements combine all three. The deliverables are interconnected — an organisation cannot meaningfully claim Adoption Framework Level 3 maturity without demonstrating Ethics Principles integration across its AI portfolio, and the Generative AI Guidelines apply specifically to the substantial proportion of AI use that involves LLMs and diffusion models.

W-01

SDAIA gap assessment

Diagnostic against the full SDAIA stack — Ethics Principles v2.0, Generative AI Guidelines, AI Adoption Framework. Maturity-level baseline determination. The starting map of where the organisation actually stands.

W-02

AI inventory & classification

Comprehensive inventory of AI systems in operation — production, pilot, vendor-supplied. Classification by criticality, processing of personal data, generative vs. discriminative, regulated-sector exposure. The foundation for all subsequent governance work.
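The classification axes this workstream names map naturally onto an inventory record, and scoping follows from the record. A minimal sketch, assuming illustrative field names (this is not a SDAIA-published schema):

```python
# Hypothetical AI inventory record along the classification axes named above.
# Field names and the scoping logic are illustrative assumptions.

from dataclasses import dataclass
from typing import Optional, List

@dataclass
class AISystemRecord:
    name: str
    stage: str                       # "production" | "pilot" | "vendor-supplied"
    criticality: str                 # e.g. "high" | "medium" | "low"
    processes_personal_data: bool    # triggers the binding PDPL obligations
    generative: bool                 # brings the Generative AI Guidelines into scope
    regulated_sector: Optional[str]  # e.g. "SAMA", "SFDA", "DGA", "CST", or None

def in_scope_instruments(r: AISystemRecord) -> List[str]:
    """Derive which layers of the SDAIA stack apply to a system."""
    scope = ["AI Ethics Principles v2.0"]   # applies to all AI stakeholders in KSA
    if r.processes_personal_data:
        scope.append("PDPL")
    if r.generative:
        scope.append("Generative AI Guidelines")
    if r.regulated_sector:
        scope.append(f"{r.regulated_sector} sectoral overlay")
    return scope

bot = AISystemRecord("support-chatbot", "production", "medium",
                     processes_personal_data=True, generative=True,
                     regulated_sector="SAMA")
print(in_scope_instruments(bot))
```

The point of the sketch is the derivation step: once each system is classified on these axes, the applicable instruments fall out mechanically, which is why the inventory is described as the foundation for all subsequent governance work.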

W-03

Ethics Principles integration

Operationalisation of the twelve Ethics Principles across the AI portfolio. Per-system assessment, embedding in design and review processes, tracking of principle conformance. Documentation that holds up to SDAIA-style review.

W-04

Generative AI controls

Specific controls for generative AI systems: watermarking, content provenance, hallucination management, human-oversight protocols, prompt-injection defences. Aligned to SDAIA Generative AI Guidelines (for-Government track).
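One of these controls, content provenance with visible disclosure, can be sketched concretely. The metadata keys and disclosure wording below are illustrative assumptions, not formats mandated by the Guidelines:

```python
# Hypothetical content-disclosure control: wrap AI-generated text with
# provenance metadata and a visible disclosure before release.
# Keys and wording are illustrative, not a mandated format.

import hashlib
from datetime import datetime, timezone

def tag_generated_content(text: str, model_id: str) -> dict:
    """Attach a disclosure and provenance metadata to generated text."""
    return {
        "content": text,
        "disclosure": "This content was generated by an AI system.",
        "provenance": {
            "model_id": model_id,
            "generated_at": datetime.now(timezone.utc).isoformat(),
            # Integrity hash lets downstream consumers detect tampering.
            "sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        },
    }

record = tag_generated_content("Quarterly summary draft", "internal-llm-v1")
print(record["disclosure"])
```

For production use the same pattern would typically be delivered through an established provenance standard (for example C2PA-style signed manifests) rather than ad-hoc JSON; the sketch only shows the shape of the obligation.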

W-05

AI Office establishment

Design and stand-up of an AI Office consistent with the Adoption Framework's institutional model. Governance committee structure, role definitions, decision-rights, integration with existing CISO / DPO functions.

W-06

Maturity progression roadmap

Plan to move from current Adoption Framework level to target level. Investment requirements across the four enablers, sequencing, milestones, governance signals. The capability-build plan.

W-07

Sectoral regulator engagement

Where the organisation operates under SAMA, SFDA, DGA or CST, the sectoral overlay specific to its domain. Coordination with sector-specific AI risk assessment requirements, model governance frameworks, regulatory sandboxes.

W-08

PDPL · AI intersection

Where AI processes personal data, the binding PDPL obligations apply in full. Lawful basis for training data, automated decision-making controls, transparency to data subjects, cross-border training-data governance. The intersection with the certified privacy program.

W-09

ISO 42001 readiness

Where the organisation pursues ISO 42001 certification — increasingly expected of government suppliers — alignment of SDAIA work with ISO 42001 management-system requirements. Certifiable layer over the SDAIA-aligned governance fabric.

W-10

Procurement & supplier governance

Vendor due diligence framework for AI suppliers. SDAIA-alignment requirements in supplier contracts. Third-party AI risk assessment. Particularly important for organisations consuming AI-as-a-service rather than building internally.

W-11

Board & executive reporting

Board-grade reporting infrastructure on AI governance posture. SDAIA-aligned narrative, maturity progression, principal risk register, sectoral regulator engagement status. The communication layer that justifies the investment.

08 — How this is delivered

Three engagement shapes.

SDAIA AI governance work is delivered through one of three engagement shapes — depending on whether the organisation is starting fresh, building toward ISO 42001 certification, or maintaining post-build governance. KSA's 2026 Year of AI directive has materially accelerated demand across all three shapes; lead times for substantive engagements have lengthened.

09 — Common questions

Things people ask on first call.

Common questions on SDAIA AI governance in early 2026 — particularly from organisations evaluating whether SDAIA alignment is worth the investment, organisations supplying KSA government and finding AI governance becoming a procurement requirement, and organisations needing to understand how SDAIA, PDPL and ISO 42001 fit together.

If SDAIA's instruments are non-binding, why align?
Three reasons. First, procurement: KSA government tenders increasingly screen for SDAIA alignment as a precondition; without it, bids do not progress. Second, sectoral regulators (SAMA, SFDA, DGA, CST) have integrated SDAIA's expectations into their own binding requirements, so non-binding at the SDAIA level becomes binding at the sector level. Third, the 2026 Year of AI directive signals that AI law is on the horizon — organisations that align now are positioned for the binding regime when it arrives, rather than scrambling. The pragmatic question is rarely "must we comply" but "can we afford the procurement and reputational consequences of not aligning."
How does SDAIA AI governance relate to PDPL?
PDPL is binding privacy law; SDAIA AI governance is non-binding guidance that explicitly references and reinforces PDPL. Where AI processes personal data, PDPL obligations apply in full — lawful basis, automated decision-making restrictions, cross-border transfers, breach notification. SDAIA's Ethics Principles include Privacy as a standalone principle anchored to PDPL. The practical pattern: PDPL is the binding floor, SDAIA the additional ethics-and-governance overlay. Organisations cannot use SDAIA alignment to substitute for PDPL compliance, but they can use it to demonstrate substantive AI privacy posture beyond minimum PDPL requirements. Read the KSA PDPL page for the binding privacy side.
Is ISO 42001 certification required?
Not formally — but increasingly expected of suppliers to KSA government. SDAIA itself certified to ISO 42001 in July 2024, becoming the first national AI authority globally to do so; that move signalled where the Kingdom expects its supplier ecosystem to land. Government tenders increasingly cite ISO 42001 (or equivalent AI management-system certification) as a procurement criterion. For private-sector organisations operating outside government supply chains, ISO 42001 is currently optional; for any organisation in or adjacent to government work, it is rapidly becoming non-optional in practice. Treat it as the certifiable layer that completes the SDAIA-aligned governance fabric.
Did the AI Ethics Principles really change in 2025?
Yes — meaningfully. The original 2023 issuance had seven principles structured around compound concepts (e.g., "Privacy & Security" as one principle). The 2025 v2.0 expansion decomposed compound principles into more granular constituents and added Integrity as a new standalone principle, arriving at twelve principles total. The substance has not changed dramatically — the values framework is essentially consistent — but the granularity makes the principles more operationally testable. Each of the twelve now maps to discrete controls and review questions, where the original seven required interpretation to operationalise. Organisations on the 2023 version should refresh against v2.0 but rarely need to rebuild from scratch.
Are the Generative AI Guidelines mandatory?
Formally non-binding; practically expected for any organisation deploying generative AI in KSA at scale. The "for Government" track is the more detailed and operationally specific version, and the more relevant reference for organisations supplying government clients. For private-sector deployments, the Guidelines are the most concrete operational guidance available in KSA on hallucination management, watermarking, deepfake controls, and human-oversight thresholds. Sectoral regulators reference them in their AI risk assessment requirements. Organisations not aligned to the Guidelines are exposed both to procurement blocks and to enforcement risk under sector-specific binding rules that effectively codify the Guidelines' substance.
What does "AI Office" actually mean in the Adoption Framework?
A formal organisational structure — typically a small, senior team — responsible for AI strategy, governance and capability development across the organisation. SDAIA introduced the AI Office concept in the Adoption Framework and announced 23+ government entities had established AI Offices by late 2024. In practice an AI Office combines executive sponsorship (usually a C-level head), governance authority (sets policy, reviews high-stakes deployments), capability development (talent, training, vendor selection), and cross-functional coordination (data, IT, business units, legal/privacy). For private sector, the structure is most often a virtual team chaired by a CIO or CTO with cross-functional representation; for government, it is increasingly a formal unit with dedicated headcount.
Will there be a binding KSA AI law?
Likely — though the timeline is uncertain. A draft "AI Hub Law" went out for public consultation in April 2025 but introduced data-embassy concepts rather than a comprehensive AI regulatory framework, and has not been enacted. A more comprehensive AI law remains under development. The 2026 Year of AI directive may accelerate the legislative process, but practitioners should not assume a comprehensive AI law in 2026; the more likely pattern is incremental sectoral expansion (more SAMA, SFDA, DGA AI-specific requirements) plus eventual primary AI legislation in 2027-2028. Organisations aligning to SDAIA's framework now are positioned for that legislation when it arrives. Monitor SDAIA, the Council of Ministers, and Shura Council publications.
Is SDAIA's framework relevant outside KSA?
Increasingly, yes. The Riyadh Charter on AI for the Islamic World — developed jointly with ICESCO — was approved by all 53 ICESCO Islamic member states in March 2025, positioning SDAIA's framework as a reference model across the Islamic world. ICAIRE in Riyadh has UNESCO Category 2 recognition. For organisations operating across the broader Middle East, North Africa, and Islamic-majority countries beyond, SDAIA's framework increasingly informs adjacent national AI governance frameworks. Practitioners should treat KSA-aligned SDAIA work as portable across this geography in a way that EU AI Act work, for instance, is not.

2026 is the Year of AI. Be ready.

SDAIA's framework is the working AI governance baseline for any organisation operating in KSA, supplying KSA government, or expanding into the Kingdom under the 2026 Year of AI directive. A 30-minute scoping call costs nothing — we will tell you honestly where your AI governance posture stands against the SDAIA stack, what alignment work is realistic given your timeline, and whether ISO 42001 certification should be on your roadmap.

Schedule a call