AI sovereignty is won in the execution layer

Middle powers don’t have to choose between US and Chinese AI stacks. A controllable, auditable orchestration layer keeps your critical data, models, and workflows under your own laws.

Cornelia Kutterer

In today's fractured geopolitical landscape, AI and cloud sovereignty is no longer just a technical or regulatory question — it's a race for power, leverage, and independence. Middle powers — countries that are not global hegemons but still carry significant regional or economic weight — are increasingly recasting the conversation in these terms: the issue is not merely which technologies they adopt, but how far they can escape structural reliance on infrastructures ultimately shaped by great‑power interests, especially where their most sensitive AI‑driven systems in defence, health, finance, and national security are concerned. Enterprises, particularly those operating across jurisdictions and in critical sectors, are confronting strikingly similar questions.

The Moment Middle Powers Start Asking the Right Question

Following discussions at the World Economic Forum, the Munich Security Conference, and the recent AI Impact Governance Summit in India, the debate around AI sovereignty has increasingly been reframed through the lens of these middle powers. It is no longer simply about which large language model is most capable, or whether a government should build a sovereign compute cluster. It is a more fundamental question: who can control the stack that runs our most sensitive AI workflows?

Across Europe, Southeast Asia, the Gulf, and Latin America, a similar realisation is taking shape: the last two decades of growing digital dependency have created structural exposures that sit uneasily with a more fragmented, risk‑aware world.

For many middle powers, the opposite extreme — building the entire technology stack from chips to frontier models — is not realistic, at least not in the short term. Global trade, interoperability, and access to the highest‑quality technologies remain essential to growth, competitiveness, and security. The goal is not autarky, but the ability to participate in global ecosystems on terms that protect core sovereign interests.

This is why the real question is not "US or China" versus "go it alone", but how to sequence and structure dependencies over time. Middle powers need a strategy that increases resilience first — diversifying vendors, localising control over the most sensitive layers, and insisting on substitutability — and then uses that resilience as a springboard to deeper independence where it matters most. In that trajectory, sovereignty becomes a path: from smarter dependence to credible influence over the entire stack. For globally active enterprises in regulated sectors, the same logic applies at corporate scale: resilience and optionality first, deeper proprietary capability over time.

At the same time, public sentiment and boardroom surveys point in the same direction: the issue is less whether AI should exist, and more who controls when and how it is used. People and institutions do not want maximal AI; they want AI that operates under rules they can see, shape, and, if necessary, switch away from.

A February 2026 Chatham House research paper put it plainly:

The critical question is not whether to depend on other nations for AI, but on which nations, to what extent and on what terms.

How Middle Powers Can Weather US and Chinese AI Dominance, Chatham House, February 2026

The Path Towards More Control Goes Through the Execution Layer

This insight is reshaping how forward‑thinking middle powers should approach AI procurement. The frontier models — GPT‑5, Gemini 3, Claude 4‑series, Mistral Large, Llama 3, or the next open‑weight release — are increasingly a commodity. Powerful models are proliferating; for most use cases, the marginal value of controlling model weights is declining as long as you can switch between best‑in‑class options. That does not mean model capability is irrelevant, or that Europe can or should ignore frontier AI altogether. The Commission's new Frontier AI Grand Challenge, launched under the EU's Apply AI Strategy, explicitly aims to close the strategic gap in high‑end models and make large‑scale European systems available as open resources across the continent.

What is not a commodity is the layer that governs how those models are deployed: which data they can touch, which workflows they can modify, who can authorise an action, and what audit trail is produced. That orchestration layer is where sovereignty lives.
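To make this concrete, here is a minimal sketch of what governance at the execution layer can look like in code. The names (`Policy`, `AuditTrail`, `authorise`) are illustrative, not Scrydon's actual API; the point is that data access, authorisation, and auditing are enforced in one place:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Policy:
    """Illustrative orchestration-layer policy: which datasets a model
    may touch, and which roles may authorise which actions."""
    allowed_datasets: set[str]
    approvers: dict[str, set[str]]  # action -> roles allowed to approve it

@dataclass
class AuditTrail:
    entries: list[dict] = field(default_factory=list)

    def record(self, actor: str, action: str, dataset: str, allowed: bool) -> None:
        self.entries.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "actor": actor, "action": action,
            "dataset": dataset, "allowed": allowed,
        })

def authorise(policy: Policy, trail: AuditTrail,
              actor_role: str, action: str, dataset: str) -> bool:
    """Every request is checked against the policy and logged,
    whether it is allowed or denied."""
    allowed = (dataset in policy.allowed_datasets
               and actor_role in policy.approvers.get(action, set()))
    trail.record(actor_role, action, dataset, allowed)
    return allowed
```

Because every request, allowed or denied, passes through a single gate that also writes the audit record, governance becomes a property of the architecture rather than of each individual model integration.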

If your orchestration platform is itself a foreign SaaS product or tightly coupled to a single hyperscaler, you have not achieved sovereignty. You have merely outsourced the sovereignty question one level up. As Europe moves towards a new cloud and AI sovereignty package, this is the part of the stack that will matter most for middle powers: how governance, data flows, and infrastructure control at the orchestration layer are understood and operationalised in practice.

Wiring Sovereignty into Your AI Stack

Scrydon was designed precisely to solve this problem — initially for Europe, but with an architecture that is inherently global, because the problem is inherently global.

Scrydon is a Sovereign Agentic AI platform: a flow‑based orchestration layer that runs on infrastructure you control, in jurisdictions you choose, under the data‑governance rules you define. It is not a model. It is not a cloud. It is the execution environment that connects your models, your data, and your teams — while keeping every decision, action, and data access inspectable and auditable by your own people.

For a middle power deploying AI in sensitive domains, this translates directly:

  • Run on your own compute — on‑premises, in a national data centre, or in a sovereign cloud. Scrydon does not require a connection to any foreign infrastructure.
  • Bring your own models — whether that is a European open‑weight model, a nationally fine‑tuned variant, or a commercially licensed frontier model accessed through a data‑processing agreement that satisfies your legal requirements.
  • Enforce your data boundaries — Scrydon's Data architecture ensures that sensitive datasets are never moved outside the boundaries you specify. Data sovereignty is structural, not a policy checkbox.
  • Maintain full auditability — every agent action, every model call, every workflow execution produces a tamper‑evident log that your own compliance and security teams can inspect. This is the foundation of accountable AI for public‑sector applications.
  • Avoid lock‑in at every layer — because the platform is built on open standards and supports pluggable model providers, your investment in orchestration logic, workflow design, and governance policy is portable.
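"Tamper‑evident" has a precise technical meaning. One common construction is a hash chain, in which each log entry commits to the hash of the previous one; the sketch below illustrates the technique and is not a description of Scrydon's implementation:

```python
import hashlib
import json

def append_entry(log: list[dict], event: dict) -> None:
    """Append an event, chaining it to the hash of the previous entry.
    Editing any earlier entry breaks every hash that follows it."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"event": event, "prev": prev_hash}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    log.append({**body, "hash": digest})

def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash; return False if any entry was altered."""
    prev_hash = "0" * 64
    for entry in log:
        body = {"event": entry["event"], "prev": entry["prev"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True
```

Altering any earlier entry changes its hash and breaks every link after it, so your own compliance and security teams can verify the whole log without having to trust the component that wrote it.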

Agency to Steer in a Changing Order

Look a decade out in almost any plausible scenario exercise and one theme recurs: middle powers cannot rely on a stable, benign international order. Whether the world trends towards a more Sinocentric system, a volatile US‑led order, a fragile G2, or a messier landscape of competing blocs and non‑state actors, the external environment becomes less predictable, not more. In that context, sovereignty is no longer a nice‑to‑have principle; it is a precondition for having any real agency at all.

The question is not whether states or enterprises can insulate themselves from global interdependence, but how to carve out a path through it that leaves them with real choices when conditions change. That path depends on where they build room for manoeuvre into the stack: where they can change providers without crisis, shift workloads without asking permission, and enforce their own rules even as the balance of power around them shifts. That is what "strategic resilience" looks like when translated into architecture.

This is why tech sovereignty keeps surfacing as a prerequisite in discussions about Europe's future role in the world order — and in boardrooms of large enterprises that must operate across jurisdictions. Sovereignty at the orchestration layer — the ability to decide which models, which clouds, and which data boundaries apply to which workflows, and to change those decisions over time — is one of the few levers that remains available even in adverse geopolitical scenarios. It is also the lever that can be pulled fastest, compared with long‑cycle investments in fabs, chips, or home‑grown frontier models.
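In code, "the ability to change those decisions over time" usually comes down to programming workflows against an interface rather than a vendor. A minimal sketch, with illustrative class names that are not Scrydon's API:

```python
from typing import Protocol

class ModelProvider(Protocol):
    """Any backend, whether a sovereign-cloud deployment, an on-prem
    open-weight model, or a commercial API, satisfies one interface."""
    def complete(self, prompt: str) -> str: ...

class LocalOpenWeightModel:
    def complete(self, prompt: str) -> str:
        return f"[local] {prompt[:20]}"

class CommercialFrontierModel:
    def complete(self, prompt: str) -> str:
        return f"[frontier] {prompt[:20]}"

def run_workflow(provider: ModelProvider, prompt: str) -> str:
    # Workflow logic depends only on the interface, so the provider
    # can be swapped by configuration without rewriting the workflow.
    return provider.complete(prompt)
```

Because `run_workflow` never names a concrete vendor, switching providers is a configuration decision rather than a re-engineering project, which is precisely the lever the text describes.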

Scrydon builds the orchestration environment in which those choices can be made, combined, and revised without rebuilding the entire system each time. The same architecture is available to any organisation — government, enterprise, or critical infrastructure — that has decided it needs to build resilience and choice into its stack to secure its most sensitive AI systems.

If your government or organisation is now grappling with these sovereignty questions, this is a good moment to factor them into how your AI architecture evolves. Introducing options at the orchestration layer — on providers, locations, and controls — gives you more room to adjust as technology and geopolitics continue to shift.

— Cornelia