OpenAI for Germany is positioned as a sovereign European offering – operated via Delos Cloud on Microsoft Azure. But critics see gaps: even if customer data is not to be used for training, control, dependencies and exit capability remain key risk factors. For organisations with high compliance and data protection requirements, this means that jurisdiction, lock-in and genuine alternatives must also be taken into account.
Germany is finally getting "sovereign AI". At first that sounds like exactly what is needed: control, security and independence. However, the current debate around "OpenAI for Germany" shows that the decisive factor is not where something runs or is hosted, but who actually controls the tech stack, which jurisdiction it is subject to and, ultimately, whether organisations have realistic alternatives in an emergency. "OpenAI for Germany" primarily - but not only - concerns public authorities. Companies, too, need to take a close look and ask themselves: Which technical dependencies are measurable, and how big are they? Which incidents make these risks tangible? And what pragmatic steps can, or must, we take now to be truly "sovereign" in technical and legal terms and to keep business operations running in an emergency?
Why even the BMI is warning against US hyperscalers
Germany wants to bring AI into public administration - "sovereignly". "OpenAI for Germany" is scheduled to launch in 2026, built by SAP and OpenAI and operated by the SAP subsidiary Delos Cloud.
The catch is the technical basis: Delos Cloud relies on Microsoft Azure technology.
Where experts see gaps in "European AI sovereignty"
If even the state (with its secrecy requirements, BSI specifications and public procurement law) ends up relying on US technology as a foundation, it is high time for an unsparing reality check in every industry that handles sensitive data - from healthcare and law to finance and critical infrastructure. And the criticism is fierce: Apfeltalk reads "OpenAI for Germany" less as a sovereign European AI solution and more as a case of sovereignty washing:
Authorities would buy a German shell via SAP/Delos but, under the hood, run OpenAI models on Microsoft Azure. The problem: control over central administrative processes would be transferred to a US tech stack. The author calls this a "Trojan horse" because we would be "handing over the key to our own administration". And even if it is promised that customer data will not be used for training, the "technological dependency remains total": Europe supplies usage, data streams and budgets, while the roadmap and value creation end up with US providers.
The assessment is correspondingly harsh: "the path to digital immaturity" is leading Germany into a fatal and, above all, completely unnecessary new dependency in the key future field of AI. Or, to quote Apfeltalk: "US IT is the new Russian gas!"
In addition: a data center in Germany is not enough
Many discussions still revolve around location ("hosted in Europe" = fine). In practice, however, three other factors are decisive: control, jurisdiction and exit capability. The German government is well aware of the dangers: the Federal Ministry of the Interior (BMI) expressly warns that the CLOUD Act can oblige US providers to hand over data even if it is stored outside the USA.
FISA Section 702 not only permits the targeted surveillance of non-US persons outside the USA; it also expressly obliges communications service providers to cooperate actively. Microsoft Azure is no exception. To put it bluntly: the text of the law leaves no room for the comforting reading that "things are never as bad as they sound". AI workloads are by no means exempt from this legal interpretation.
What is already a reality: lock-in and resilience
Past incidents have already made the risks of US dependency tangible. Three fields of action follow from them:
Field of action 1: Companies may be fully "compliant" on paper - and still come to an operational standstill if their operations depend on a few global platforms.
Field of action 2: Dependency often remains invisible until an end of support, a security vulnerability or a forced update suddenly triggers a crisis.
Field of action 3: An exit is not a business case for the IT department's PowerPoint archive but a company-wide strategy project with responsibilities, processes, training, migration, change management - and costs.
What companies should do now at the latest
The 9-point checklist for real data sovereignty
Conclusion: The questions that every authority and every company will have to ask themselves in 2026 - and cannot delegate to "IT" alone:
- When dealing with AI, do we want "compliance" as a label on paper or on the website - ticking off checklists and taking the supposedly easy, "no alternative" route?
- Or do we want to anchor sovereignty as a genuine component of our business model and actually make our data secure at every level - legal, technical and political?
To put it bluntly: the requirements for genuine "data sovereignty" have risen significantly in recent years - far beyond many common "compliance" requirements - and the debate will only intensify as AI moves to the center of the stack. "OpenAI for Germany" shows once again: only when the tech stack of a company or public authority is built, technically and legally, so that it is both secured against unauthorized access and keeps the organisation operationally capable of acting in an emergency is "sovereign" more than just a label.