The Saudi Data & AI Authority (SDAIA) is the national reference for AI governance in the Kingdom. Its framework is not a single regulation but an entire architecture: AI Ethics Principles, Generative AI Guidelines, the AI Adoption Framework, and a layered set of sectoral instruments under SAMA, SFDA, DGA and CST. The instruments are predominantly non-binding, but for organisations selling into government or operating at scale in regulated sectors, alignment is fast becoming a procurement precondition. Saudi Arabia has declared 2026 the Year of AI; the demand for structured AI governance has never been higher.
SDAIA's AI governance architecture is best understood as a layered stack rather than a single regulation. The Personal Data Protection Law (PDPL) forms the binding floor — substantive privacy law with full enforcement. SDAIA's ethics, generative AI and adoption instruments sit on top as non-binding guidance — increasingly treated as de facto procurement standards. ISO 42001 (which SDAIA itself has held since July 2024) provides the certifiable management-system layer. Each layer addresses a different risk and a different audience; together, they form the working governance baseline for AI in the Kingdom.
PDPL (Royal Decree M/19). Binding. Substantive personal data law. Full enforcement since 14 September 2024. Underwrites every privacy obligation in any AI system processing personal data.
AI Ethics Principles. Non-binding. First issued 2023, expanded 2025 to twelve principles. The values framework, increasingly tested in procurement and government engagement contexts.
Generative AI Guidelines. Non-binding. Two-track instrument issued January 2024, one version for government entities, one for the public. Content authenticity, watermarking, hallucination management, deepfake controls.
AI Adoption Framework. Non-binding. Released at the GAIN Summit, 12 September 2024. Four-level maturity model plus four enablers. The structural roadmap for scaling AI across an organisation, public or private.
ISO 42001. Certifiable. SDAIA itself certified July 2024, the first national AI authority globally to do so. International standard, structurally aligned with ISO 27001 and ISO 27701. Increasingly expected of suppliers to KSA government.
"Non-binding" is doing a lot of work in this stack. SDAIA's ethics, generative AI and adoption instruments are formally guidance — but they are increasingly treated as de facto procurement requirements for organisations bidding into government tenders, joining public-private AI initiatives, or operating at scale in sectors with active regulators (SAMA, SFDA, CST). The practical question for most organisations is not "must we comply" but "can we afford the procurement and reputational consequences of not aligning."
SDAIA first issued AI Ethics Principles in 2023 with seven principles. The 2025 expansion (the v2.0 document published on SDAIA's portal as AI Ethics Principles) broke compound principles into more granular constituents and added integrity as a standalone principle. The result is twelve principles covering the full ethical spectrum that AI systems engage with — from technical reliability to human dignity to environmental responsibility. The principles apply to all AI stakeholders within KSA: public, private, non-profit, researchers, civil society.
Integrity. AI systems should operate with honesty and ethical conduct. New as a standalone principle in v2.0.
Fairness. AI systems should be free from inappropriate bias; outcomes should not disadvantage protected groups.
Privacy. Personal data used by AI systems should respect individuals' privacy rights, anchored to PDPL obligations.
Security. AI systems should be secure against unauthorised access, manipulation, or compromise. The cybersecurity baseline.
Reliability. AI systems should perform consistently under expected conditions, with predictable behaviour and outcomes.
Safety. AI systems should not cause physical, psychological, or social harm; proactive risk management embedded.
Transparency. AI system behaviour, capabilities, and limitations should be visible to those affected by their decisions.
Explainability. AI decisions should be explicable in human-understandable terms — not just transparent but comprehensible.
Accountability. Organisations should be accountable for the AI systems they deploy: clear ownership, traceable decisions.
Responsibility. Designers, developers and operators bear responsibility for AI outcomes — accountability operationalised.
Humanity. AI should serve human dignity, autonomy, and wellbeing, with humans in the loop where stakes are high.
Social and environmental benefit. AI deployments should produce net benefit to society and the environment, with sustainability as a design constraint.
The Principles are formally non-binding but practically authoritative. SDAIA's powers include measuring compliance of in-scope entities and auditing AI ethics activities through the defined compliance mechanism, with support from sector regulators. Non-compliance does not directly trigger Principle-specific penalties but can trigger enforcement under PDPL, sectoral regulations, or contractual obligations — particularly in government procurement. The pragmatic posture: treat the Principles as binding for any deployment that matters.
SDAIA released the Generative AI Guidelines in January 2024 as a two-track instrument — one version for government entities, one for the general public. Where the Ethics Principles cover all AI systems at the values level, the GenAI Guidelines address the specific operational risks introduced by generative systems: hallucinations, deepfakes, content authenticity, watermarking, prompt injection, and the human-oversight requirements that follow. They are the most operationally specific of SDAIA's non-binding instruments and the most actively referenced in vendor risk frameworks across KSA.
The four operational risk patterns the Guidelines address explicitly. Each represents a category of harm that pre-2022 AI guidance was silent on but that LLMs and diffusion models have made universal.
Content authenticity. Generated content should be identifiable as such: watermarking, metadata tagging, and transparent disclosure to end users that AI-generated material is not human-produced.
Hallucination management. Generative systems produce plausible but factually incorrect outputs. Required: factual verification protocols, confidence calibration, human review on high-stakes decisions.
Deepfake controls. Synthetic audio, video, and imagery can be weaponised against identity, reputation, and democratic processes. Detection, provenance tracking, and response procedures required.
Human oversight. Where generative AI outputs feed decisions affecting individuals — credit, hiring, healthcare, legal — human-in-the-loop review is mandated. The degree of automation depends on the stakes.
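These patterns translate naturally into a pre-release gate on generated content. A hedged sketch under stated assumptions — the function names, the 0.7 threshold, and the high-stakes list are illustrative (the use-case categories echo the Guidelines' examples, but nothing here is a prescribed SDAIA control):

```python
from dataclasses import dataclass

@dataclass
class GeneratedOutput:
    text: str
    model: str
    ai_generated: bool = True    # content-authenticity disclosure flag
    confidence: float = 0.0      # verification/confidence proxy for the output

# High-stakes decision domains where human review is mandated.
HIGH_STAKES = {"credit", "hiring", "healthcare", "legal"}

def release_decision(out: GeneratedOutput, use_case: str) -> str:
    """Route generated content to auto-release or human review."""
    if use_case in HIGH_STAKES:
        return "human_review"      # human-in-the-loop regardless of confidence
    if out.confidence < 0.7:       # illustrative threshold, not an SDAIA number
        return "human_review"      # hallucination risk: verify before release
    return "auto_release_with_label"  # still carries the AI-generated label

print(release_decision(GeneratedOutput("draft", "llm-x", confidence=0.95), "hiring"))
# high-stakes use case -> "human_review"
```

Note the asymmetry the Guidelines imply: confidence can never buy a high-stakes output out of human review; it only gates the low-stakes path.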
Government entities are the primary audience. The "for Government" track is the more detailed and operationally specific of the two — and the one most relevant for organisations supplying government clients. The "for Public" track is consumer-facing guidance on responsible use. For private-sector organisations operating outside government supply chains, the for-Government version is still the more useful reference; it shows where SDAIA's expectations land for organisations under formal scrutiny.
SDAIA released the AI Adoption Framework at the third Global AI Summit (GAIN) on 12 September 2024 — and it has rapidly become the default reference for organisations scaling AI in KSA. The framework structures adoption around two axes: a four-level maturity model showing where an organisation is today and where it could be, and four enablers showing what needs to be built to move between levels. Twenty-three government entities have established AI Offices within the framework; private-sector adoption continues to accelerate under the 2026 Year of AI directive.
Level 1. Initial AI awareness. Limited or experimental AI initiatives. No formal governance. AI not yet integrated into strategic priorities.
Level 2. AI capabilities being built. Pilot projects in production. Initial governance practices forming. AI Office or equivalent structure beginning to emerge.
Level 3. AI integrated across business functions. Mature governance, ethics review processes, established AI Office. Organisation-wide capability infrastructure in place.
Level 4. AI as a strategic differentiator. Continuous improvement, advanced research, sectoral leadership. AI as a source of competitive advantage and societal contribution.
Data enabler. Quality data as the foundation for AI models. Data governance, classification, accessibility, lineage, lawful basis for use. The PDPL alignment lives here.
Technology enabler. Compute, infrastructure, model platforms, MLOps tooling, integration architecture. Sovereign cloud and on-prem options for sensitive workloads.
Talent enabler. Talent, training, organisational structure, AI Office staffing. Saudisation alignment, capacity building, role definition for AI-specific functions.
Governance enabler. Ethics integration, governance committees, ongoing review processes, alignment with SDAIA Principles. The governance fabric that holds the rest accountable.
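A maturity baseline can be approximated by scoring each enabler on the four-level scale and treating the weakest as the binding constraint. A sketch under stated assumptions — the min() rule is a common assessment convention, not something the Framework itself prescribes, and the scores are invented:

```python
# Four enablers scored on the four-level maturity scale (1 = lowest, 4 = highest).
enabler_scores = {
    "data": 3,
    "technology": 3,
    "talent": 2,
    "governance": 2,
}

# An organisation's effective level is capped by its weakest enabler:
# Level 3 data capability cannot offset Level 2 governance.
effective_level = min(enabler_scores.values())
gaps = [name for name, score in enabler_scores.items() if score == effective_level]

print(f"Level {effective_level}", gaps)   # the enablers to invest in first
```

The gap list is the seed of the capability-build plan: progression to the next level means lifting every enabler at the current floor, in sequence.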
SDAIA sets the national AI governance framework, but sectoral regulators add their own AI-specific requirements within their domains. The four most active in late 2025 / early 2026 — SAMA, SFDA, DGA and CST — each operate at the intersection of their existing regulatory mandate and SDAIA's national framework. For organisations operating in regulated industries, the sectoral overlay typically dominates day-to-day compliance work; SDAIA framework alignment is an additional layer rather than a substitute.
SAMA (Saudi Central Bank). AI risk assessments mandatory for AI deployments in regulated financial services. Model risk management, fairness in credit decisions, anti-bias requirements, robustness testing. Aligned with SDAIA Principles plus Basel-style model governance.
SFDA (Saudi Food and Drug Authority). AI in medical devices subject to SFDA regulatory oversight. Software-as-a-medical-device (SaMD) classification, clinical evaluation, post-market surveillance. AI-specific updates to medical device frameworks during 2024-2025.
DGA (Digital Government Authority). AI deployment in the government context. Cross-references the government track of SDAIA's Generative AI Guidelines. Reference architecture for AI Offices across 23+ government entities. Procurement standard for vendors.
CST (Communications, Space & Technology Commission). Cloud computing, data centres, ICT services regulation — including environments where AI systems are hosted or deployed. Cloud Computing Regulatory Framework directly applicable to AI infrastructure.
A draft AI Hub Law went out for public consultation in April 2025. Despite the title, that draft did not propose a comprehensive AI regulatory framework — it introduced the concept of data embassies, allowing foreign entities to host data in KSA under their own governing legislation. The draft has not been enacted. A more comprehensive AI law remains under development; in the interim, the SDAIA framework plus sectoral overlays plus PDPL form the working governance baseline. Practitioners should monitor 2026 closely — the Year of AI directive may accelerate primary legislation.
SDAIA's AI governance architecture cannot be separated from Saudi Arabia's broader strategic context. The National Strategy for Data & AI (NSDAI) makes AI central to Vision 2030's economic diversification agenda. The Kingdom has invested heavily in sovereign AI infrastructure — data centres, large language models with strong Arabic capability, and partnerships with global hyperscalers under sovereign cloud arrangements. Declaring 2026 the Year of AI is not symbolic; it is a directive to accelerate adoption across the entire public sector at scale.
The Year of AI declaration formalises what was already accelerating: government-wide AI adoption, sovereign infrastructure investment, and the maturation of SDAIA's governance architecture from advisory to procurement-grade. For organisations supplying government, the practical effect is that AI governance posture is now a procurement filter.
SDAIA's international footprint is also expanding. The Riyadh Charter on Artificial Intelligence for the Islamic World — developed jointly by ICESCO, SDAIA and the Saudi National Commission — was unanimously approved by all 53 ICESCO Islamic member states in March 2025. ICAIRE, the International Center for AI Research and Ethics in Riyadh, was recognised by UNESCO as a Category 2 centre. Together, these signal that SDAIA's framework is being positioned not just as KSA national governance but as a reference model with reach across the Islamic world and through UNESCO channels.
SDAIA alignment work splits into three broad activity types: ethics-and-principles integration, generative-AI operational governance, and adoption-framework maturity build. Most engagements combine all three. The deliverables are interconnected — an organisation cannot meaningfully claim Adoption Framework Level 3 maturity without demonstrating Ethics Principles integration across its AI portfolio, and the Generative AI Guidelines apply specifically to the substantial proportion of AI use that involves LLMs and diffusion models.
Diagnostic against the full SDAIA stack — Ethics Principles v2.0, Generative AI Guidelines, AI Adoption Framework. Maturity-level baseline determination. The starting map of where the organisation actually stands.
Comprehensive inventory of AI systems in operation — production, pilot, vendor-supplied. Classification by criticality, processing of personal data, generative vs. discriminative, regulated-sector exposure. The foundation for all subsequent governance work.
Operationalisation of the twelve Ethics Principles across the AI portfolio. Per-system assessment, embedding in design and review processes, tracking of principle conformance. Documentation that holds up to SDAIA-style review.
Specific controls for generative AI systems: watermarking, content provenance, hallucination management, human-oversight protocols, prompt-injection defences. Aligned to SDAIA Generative AI Guidelines (for-Government track).
Design and stand-up of an AI Office consistent with the Adoption Framework's institutional model. Governance committee structure, role definitions, decision-rights, integration with existing CISO / DPO functions.
Plan to move from current Adoption Framework level to target level. Investment requirements across the four enablers, sequencing, milestones, governance signals. The capability-build plan.
Where the organisation operates under SAMA, SFDA, DGA or CST, the sectoral overlay specific to its domain. Coordination with sector-specific AI risk assessment requirements, model governance frameworks, regulatory sandboxes.
Where AI processes personal data, the binding PDPL obligations apply in full. Lawful basis for training data, automated decision-making controls, transparency to data subjects, cross-border training-data governance. The intersection with the certified privacy program.
Where the organisation pursues ISO 42001 certification — increasingly expected of government suppliers — alignment of SDAIA work with ISO 42001 management-system requirements. Certifiable layer over the SDAIA-aligned governance fabric.
Vendor due diligence framework for AI suppliers. SDAIA-alignment requirements in supplier contracts. Third-party AI risk assessment. Particularly important for organisations consuming AI-as-a-service rather than building internally.
Board-grade reporting infrastructure on AI governance posture. SDAIA-aligned narrative, maturity progression, principal risk register, sectoral regulator engagement status. The communication layer that justifies the investment.
SDAIA AI governance work is delivered through one of three engagement shapes — depending on whether the organisation is starting fresh, building toward ISO 42001 certification, or maintaining post-build governance. KSA's 2026 Year of AI directive has materially accelerated demand across all three shapes; lead times for substantive engagements have lengthened.
For organisations needing to understand where they stand against the full SDAIA stack. Diagnostic across Ethics Principles, GenAI Guidelines, Adoption Framework. Maturity-level determination, gap inventory, prioritised remediation roadmap. Most common entry point.
For organisations building an SDAIA-aligned AI governance program from scratch or as part of an ISO 42001 certification path. AI Office stand-up, ethics integration, generative AI controls, maturity progression to target level. Frequently delivered alongside an ISO 42001 build.
For organisations with established AI governance needing senior backup on harder questions — sectoral regulator engagement, novel AI deployments, generative AI policy decisions, ISO 42001 surveillance audit support, board-grade narrative shaping. Block-hour retainer.
Common questions on SDAIA AI governance in early 2026 — particularly from organisations evaluating whether SDAIA alignment is worth the investment, organisations supplying KSA government and finding AI governance becoming a procurement requirement, and organisations needing to understand how SDAIA, PDPL and ISO 42001 fit together.
SDAIA's framework is the working AI governance baseline for any organisation operating in KSA, supplying KSA government, or expanding into the Kingdom under the 2026 Year of AI directive. A 30-minute scoping call costs nothing — we will tell you honestly where your AI governance posture stands against the SDAIA stack, what alignment work is realistic given your timeline, and whether ISO 42001 certification should be on your roadmap.
Schedule a call