Most European AI workloads today run on US infrastructure. For unregulated use cases that's fine. For customer data, regulated processes, and anything touching GDPR Article 9 categories — it's a slow-motion compliance bomb.

The CLOUD Act (2018) gives US authorities the power to compel US-based providers to hand over data, including data physically stored in Europe. The 2023 EU-US Data Privacy Framework partially addresses this, but its two predecessors, Safe Harbor and Privacy Shield, were both struck down by the CJEU in Schrems I and Schrems II, and the new framework is already under legal challenge; a Schrems III ruling is widely expected.

If you're a European bank, insurer, telecom, healthcare provider, or public-sector contractor running AI on AWS us-east-1 or Azure East US, your DPO already knows this is fragile. Most CTOs don't.

The "but it's hosted in Europe" trap

Hyperscaler regions in Frankfurt, Dublin, and Paris are still operated by subsidiaries of US parent companies, so the CLOUD Act exposure is the same. The only structures that escape it are:

  • European-owned providers (OVHcloud, Scaleway, IONOS, Hetzner)
  • Sovereign-cloud joint ventures (Bleu = Microsoft + Capgemini + Orange; S3NS = Google + Thales)
  • On-premise or private cloud you control

Each has tradeoffs. Sovereign clouds are still maturing on AI tooling. European providers don't yet match hyperscaler depth on managed ML services. On-prem is heavy.

What actually matters for your AI workload

Three questions to ask, in order:

1. What data feeds the model? If it's anonymous telemetry, a hyperscaler is fine. If it's PII, health records, financial transactions, or anything you wouldn't print on a billboard, sovereignty matters (a minimal routing sketch follows this list).

2. Is the inference itself regulated? Credit decisions, medical diagnoses, insurance pricing, hiring: all increasingly fall under the AI Act's high-risk requirements for logging, traceability, and human oversight, which are far easier to satisfy with auditable, locally controlled inference.

3. What do your customer contracts say about data residency? Many B2B contracts now require EU-only processing, and running inference through a US LLM API quietly violates dozens of contracts most CTOs have never read.
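
A concrete way to operationalize question 1 is a routing gate in front of your inference layer: nothing leaves EU-controlled infrastructure unless the payload is provably non-sensitive. The sketch below is a minimal illustration in Python; the endpoint URLs, the data labels, and the crude PII heuristic are assumptions, not a reference implementation.

    # Minimal routing gate: sensitive payloads stay on EU-controlled inference.
    # Endpoint URLs and the PII heuristic are illustrative placeholders.
    import re

    EU_ENDPOINT = "https://llm.internal.example.eu/v1"   # self-hosted or EU-provider endpoint (assumed)
    US_ENDPOINT = "https://api.us-provider.example/v1"   # hyperscaler or US API endpoint (assumed)

    # Crude heuristic for the sketch; a real gate would use a proper PII
    # classifier or, better, data labels from your catalogue.
    PII_PATTERNS = [
        re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),   # IBAN-like strings
        re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),        # email addresses
        re.compile(r"\b\d{2}[./-]\d{2}[./-]\d{4}\b"),      # date-of-birth-like dates
    ]

    def looks_sensitive(text: str) -> bool:
        return any(p.search(text) for p in PII_PATTERNS)

    def pick_endpoint(prompt: str, data_label: str | None = None) -> str:
        """Route to EU infrastructure unless the payload is provably non-sensitive."""
        if data_label == "public" and not looks_sensitive(prompt):
            return US_ENDPOINT
        return EU_ENDPOINT  # default-deny: sensitive or unlabeled data stays in the EU

The design choice that matters is the default: unlabeled data stays on infrastructure you control, and the burden of proof sits with whoever wants to mark something exportable.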

Practical paths

For text/chat AI: open-weight models (Mistral, Llama family) hosted on European infrastructure. The quality gap with GPT-4-class models is real but shrinking, and for specific verticals (legal, medical) it is already negligible.
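
One way to do this without rewriting application code is to serve the open-weight model behind an OpenAI-compatible API on hardware you or a European provider operate. A minimal sketch assuming a vLLM server and the openai Python client; the host, model name, and API key are placeholders.

    # Server side (EU-controlled hardware), e.g. vLLM's OpenAI-compatible server:
    #   vllm serve mistralai/Mistral-7B-Instruct-v0.3
    # Client side: the same openai client, pointed at your own endpoint.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://inference.internal.example.eu/v1",  # assumed EU-hosted endpoint
        api_key="internal",  # self-hosted servers typically don't check this
    )

    resp = client.chat.completions.create(
        model="mistralai/Mistral-7B-Instruct-v0.3",
        messages=[{"role": "user", "content": "Summarise this claim notice in two sentences."}],
    )
    print(resp.choices[0].message.content)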

For embeddings, classification, anomaly detection: the model layer is commoditized. Run it where you control it.
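
For example, multilingual embeddings run comfortably on a CPU or a single modest GPU inside your own perimeter. A short sketch with the sentence-transformers library; the model choice is an assumption, not an endorsement.

    # Local embeddings: no text leaves your infrastructure.
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("intfloat/multilingual-e5-base")  # assumed model; swap in what you've validated

    docs = [
        "Kunde meldet unautorisierte Abbuchung auf dem Girokonto.",
        "Customer reports an unauthorised debit on their current account.",
    ]
    embeddings = model.encode(docs, normalize_embeddings=True)
    print(embeddings.shape)  # (2, 768) for this model size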

For training: this is the hardest. Real GPU capacity in Europe is scarce and expensive. Most teams train on hyperscalers and deploy sovereignly. That's a defensible compromise if your training data is non-sensitive.
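
The mechanics of that compromise are mundane: train where the GPUs are, then pull the resulting weights into storage and serving you control. A sketch assuming the training job wrote a Hugging Face-style checkpoint to an S3 bucket; bucket, prefix, and paths are placeholders.

    # Pull trained artifacts out of the hyperscaler, then serve on EU-controlled hardware.
    # Bucket, prefix, and local path are illustrative assumptions.
    import os
    import boto3
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    BUCKET = "training-artifacts"
    PREFIX = "runs/risk-model-v3/"
    LOCAL_DIR = "/srv/models/risk-model-v3"

    s3 = boto3.client("s3")
    os.makedirs(LOCAL_DIR, exist_ok=True)
    for obj in s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX).get("Contents", []):
        filename = obj["Key"].removeprefix(PREFIX)
        if filename:  # skip the prefix "directory" entry itself
            s3.download_file(BUCKET, obj["Key"], os.path.join(LOCAL_DIR, filename))

    # From here on, inference runs entirely on infrastructure you control.
    tokenizer = AutoTokenizer.from_pretrained(LOCAL_DIR)
    model = AutoModelForSequenceClassification.from_pretrained(LOCAL_DIR)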

The coming shift

By 2027, European companies in regulated sectors will be required (under the AI Act + sectoral regulations) to demonstrate jurisdictional control over high-risk AI systems. The companies starting that migration now will have time to do it well. The ones starting in 2027 will do it badly under deadline pressure.