Deploying an EU-Sovereign Kubernetes Cluster With OpenStack and Terraform
2026-02-25

Step-by-step guide to build an EU-sovereign Kubernetes control plane on OpenStack with Terraform—secure, auditable, and colocation-ready.

Build an EU‑sovereign Kubernetes control plane on your hardware: quick overview

Pain point: You need a Kubernetes control plane that satisfies strict EU data residency and legal isolation, without depending on hyperscalers' sovereign offerings or risking vendor lock‑in. This guide shows how to provision a legally‑isolated, auditable control plane on your colocation or private OpenStack cloud using Terraform and kubeadm.

This tutorial is written in 2026 and reflects recent shifts: hyperscalers launched sovereign regions in late 2025 and early 2026 (for example, AWS European Sovereign Cloud), but many teams still want independent, verifiable control on premises. The pattern below mimics sovereign‑cloud controls: a separate tenant/project for the control plane, encrypted volumes with customer‑held keys, a dedicated management network, isolated etcd, audited admin access, and predictable automation via Terraform.

What you'll get (high level)

  • A minimal, HA Kubernetes control plane across three dedicated VMs on OpenStack
  • Network isolation: dedicated control plane network and no general internet egress
  • Encrypted control plane storage with customer‑managed KMS (Vault/HSM) within the EU
  • Automated provisioning using Terraform OpenStack provider and cloud‑init
  • Operational recipes for etcd backup/restore, certificate management, and access controls

Why this matters in 2026

In late 2025 and early 2026, the market accelerated toward controlled, sovereign offerings. That trend raised awareness about three practical risks: unknown admin access, cross‑border data flows, and vendor lock‑in. Running your own control plane in a dedicated OpenStack project inside EU borders gives you predictable legal isolation and an auditable boundary. Combine that with robust automation (Terraform + kubeadm) and you get reproducible infrastructure that meets both operational and compliance needs.

Assumptions & prerequisites

  • An OpenStack cloud (colocation or private rack) reachable from your workstation and operating inside the EU region you control
  • Terraform 1.5+ and the terraform-provider-openstack/openstack provider (the OpenStack provider is community-maintained, not published under the hashicorp registry namespace)
  • Three physical or virtual hosts with at least 4 vCPU, 16 GB RAM, 100 GB disk for control plane VMs (adjust for production)
  • Access to a KMS/HSM or HashiCorp Vault deployed inside the same legal boundary
  • Basic Linux, SSH and Kubernetes (kubeadm) familiarity
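Because the OpenStack provider lives outside the hashicorp registry namespace, it is worth pinning it explicitly. A minimal terraform block might look like this (the version constraint is illustrative; pin to whatever release you have vetted):

```hcl
terraform {
  required_version = ">= 1.5.0"

  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = "~> 2.0" # illustrative; pin to your vetted release
    }
  }
}
```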

Reference architecture

We follow a simple, well‑understood HA control plane topology:

  • Control plane VMs (3) — kube‑apiserver, controller-manager, scheduler, with external etcd cluster running on separate data nodes (or embedded etcd on control plane nodes if preferred)
  • Control network — a dedicated Neutron network and subnet only for control plane traffic
  • Management bastion — a hardened jump host with SSH certificate auth and 2FA
  • Load balancer — Octavia (OpenStack LBaaS) or HAProxy providing a virtual IP for the Kubernetes API
  • Object storage — Cinder/Swift or S3‑compatible endpoint in EU for encrypted etcd snapshots

Step 1 — Create an OpenStack project & policy controls

Start by creating a dedicated OpenStack project (tenant) for the control plane. The project boundary is your primary legal isolation: limit operator access to this project, require MFA for all accounts, and sign contractual guarantees with your colo provider where needed.

  1. Make a tenant/project named k8s‑control.
  2. Create dedicated roles: control‑admin (limited to this project) and control‑auditor (read‑only, logs only).
  3. Enable Cinder encryption for volumes in this project and register a customer KMS (Vault/HSM) located in the EU.
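Using the openstack CLI, the steps above can be sketched roughly as follows (the user alice is illustrative; map users and role assignments to your identity backend):

```shell
openstack project create k8s-control --description "Sovereign K8s control plane"
openstack role create control-admin
openstack role create control-auditor
openstack role add --project k8s-control --user alice control-admin
```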

Step 2 — Terraform skeleton (OpenStack provider)

Below is a concise Terraform example to provision the control plane network, three control VMs, and an Octavia load balancer. Save as main.tf. Adjust values for your environment (image IDs, flavor, region).

provider "openstack" {
  auth_url    = var.auth_url
  region      = var.region
  tenant_name = var.project
  user_name   = var.user
  password    = var.password
}

resource "openstack_networking_network_v2" "control_net" {
  name = "k8s-control-net"
}

resource "openstack_networking_subnet_v2" "control_subnet" {
  name       = "k8s-control-subnet"
  network_id = openstack_networking_network_v2.control_net.id
  cidr       = "10.30.0.0/24"
  ip_version = 4
}

resource "openstack_networking_port_v2" "control_port" {
  count      = 3
  name       = "control-port-${count.index}"
  network_id = openstack_networking_network_v2.control_net.id

  fixed_ip {
    subnet_id = openstack_networking_subnet_v2.control_subnet.id
  }
}

resource "openstack_compute_instance_v2" "control" {
  count       = 3
  name        = "k8s-control-${count.index}"
  # the image comes from the boot volume below; setting image_name here
  # would conflict with the boot-from-volume block_device configuration
  flavor_name = var.flavor
  key_pair    = var.keypair

  network {
    port = openstack_networking_port_v2.control_port[count.index].id
  }

  block_device {
    uuid                  = openstack_blockstorage_volume_v3.control_vol[count.index].id
    source_type           = "volume"
    destination_type      = "volume"
    boot_index            = 0
    delete_on_termination = true
  }

  metadata = {
    role = "control"
  }
}

resource "openstack_blockstorage_volume_v3" "control_vol" {
  count    = 3
  name     = "k8s-control-vol-${count.index}"
  size     = 100
  image_id = var.image # image_id expects an image UUID, not a name
  # Use encrypted volumes through your OpenStack Cinder configuration
  volume_type = "encrypted"
}

resource "openstack_lb_loadbalancer_v2" "apilb" {
  name          = "k8s-api-lb"
  vip_subnet_id = openstack_networking_subnet_v2.control_subnet.id
}

resource "openstack_lb_listener_v2" "api_listener" {
  name            = "k8s-api-listener"
  loadbalancer_id = openstack_lb_loadbalancer_v2.apilb.id
  protocol        = "TCP"
  protocol_port   = 6443
}

resource "openstack_lb_pool_v2" "api_pool" {
  name         = "k8s-api-pool"
  listener_id  = openstack_lb_listener_v2.api_listener.id
  protocol     = "TCP"
  lb_algorithm = "ROUND_ROBIN"
}

resource "openstack_lb_member_v2" "api_members" {
  count         = 3
  pool_id       = openstack_lb_pool_v2.api_pool.id
  address       = openstack_networking_port_v2.control_port[count.index].all_fixed_ips[0]
  subnet_id     = openstack_networking_subnet_v2.control_subnet.id
  protocol_port = 6443
}

Notes:

  • Use encrypted volumes via an OpenStack Cinder configuration using your KMS. In many deployments you register Vault or an HSM as the encryption provider so keys never leave the EU boundary.
  • Omit floating IPs for control plane VMs. The API is reachable via the LB VIP only.
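For completeness, main.tf references input variables that must be declared somewhere; a matching variables.tf sketch (names mirror the configuration above):

```hcl
variable "auth_url" { type = string }
variable "region"   { type = string }
variable "project"  { type = string }
variable "user"     { type = string }

variable "password" {
  type      = string
  sensitive = true
}

variable "image"   { type = string } # image UUID used for the boot volumes
variable "flavor"  { type = string }
variable "keypair" { type = string }
```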

Step 3 — cloud‑init and kubeadm bootstrap

Use cloud‑init user data to install Docker/containerd and kubeadm. Keep the control plane minimal and deterministic. Example cloud‑init (simplified):

#cloud-config
package_update: true
packages:
  - apt-transport-https
  - ca-certificates
  - curl
  - gpg
runcmd:
  - curl -fsSL https://get.docker.com | sh
  - mkdir -p /etc/apt/keyrings
  - curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
  - echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' > /etc/apt/sources.list.d/kubernetes.list
  - apt-get update && apt-get install -y kubeadm=1.28.0-1.1 kubelet=1.28.0-1.1 kubectl=1.28.0-1.1
  - apt-mark hold kubeadm kubelet kubectl
  - systemctl enable --now kubelet

Important: pin kubeadm/kubelet versions to a known stable release and run offline validation tests before production.

Step 4 — isolated etcd and encrypted snapshots

For legal isolation and easier backup/restore, run a small, dedicated etcd cluster on separate nodes or as separate volumes. Example etcdctl snapshot and restore workflow:

# create snapshot (run against any healthy etcd member)
etcdctl --endpoints=https://10.30.0.10:2379 \
  --cacert=/etc/etcd/ca.crt --cert=/etc/etcd/server.crt --key=/etc/etcd/server.key \
  snapshot save /var/backups/etcd-$(date +%F).db

# generate an age keypair once; keep the private key in HSM/Vault or offline
age-keygen -o /root/agekey.txt

# encrypt the snapshot for the public key printed by age-keygen
age -r <age-public-key> /var/backups/etcd-2026-01-01.db > /var/backups/etcd-2026-01-01.db.age

# push to an S3-compatible backend inside the EU
s3cmd put /var/backups/etcd-2026-01-01.db.age s3://k8s-control-backups/etcd/

Automate this as a cron or systemd timer with retention and rotation. Ensure snapshot storage lives in the same EU jurisdiction and is covered by your legal controls.
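As one sketch of that automation, the snapshot-and-upload commands above could be wrapped in a script and driven by a systemd timer (the script path /usr/local/sbin/etcd-snapshot.sh is a name chosen here, not part of the guide):

```ini
# /etc/systemd/system/etcd-snapshot.service
[Unit]
Description=Encrypted etcd snapshot shipped to EU object storage

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/etcd-snapshot.sh

# /etc/systemd/system/etcd-snapshot.timer
[Unit]
Description=Run etcd snapshot every 15 minutes

[Timer]
OnCalendar=*:0/15
Persistent=true

[Install]
WantedBy=timers.target
```

Enable with systemctl daemon-reload && systemctl enable --now etcd-snapshot.timer.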

Step 5 — kubeadm init with control‑plane endpoint

Generate a kubeadm config pointing the API endpoint to the Octavia VIP. If you used LB IP 10.30.0.100, example:

apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.0
controlPlaneEndpoint: "10.30.0.100:6443"
networking:
  podSubnet: "10.244.0.0/16"

---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "10.30.0.11" # control node IP
  bindPort: 6443

Run kubeadm init --upload-certs --config kubeadm-config.yaml on the first control plane node (--upload-certs uploads the control plane certificates and prints the certificate key used below), then join the other control nodes with the generated join command:

kubeadm join 10.30.0.100:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> --control-plane --certificate-key <certificate-key>

Step 6 — hardening and operational controls

Make these controls mandatory for a true sovereign posture:

  • Customer‑managed keys: Use Vault/HSM inside the EU to sign TLS certs and manage Cinder encryption keys. Do not allow cloud provider managed keys unless bound by your contract.
  • No outbound egress: Block general internet egress from control plane subnets. Allow only necessary updates via a controlled proxy inside the EU with vetted mirrors.
  • Audited admin access: Force SSH certificate‑based login from a bastion using short‑lived certificates (cfssl/step or Vault SSH), and log all sessions to an audit pipeline (e.g., Elastic/Graylog) with retention policies.
  • Role separation: Operators who manage infrastructure (OpenStack) must be distinct from cluster admins. Use IAM policies to enforce separation and minimum privileges.
  • SBOM & supply chain: Sign your images and adopt SLSA verification for admission into the cluster. This is an increasing regulatory expectation in 2026.
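As a sketch of the egress posture above, a Neutron security group that admits only API traffic and SSH from the bastion, and defines no general egress rule, might look like this (CIDRs and the bastion address are illustrative):

```hcl
resource "openstack_networking_secgroup_v2" "control_sg" {
  name                 = "k8s-control-sg"
  description          = "Control plane: API in, bastion SSH in, no general egress"
  delete_default_rules = true # drop the permissive default egress rules
}

resource "openstack_networking_secgroup_rule_v2" "api_in" {
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 6443
  port_range_max    = 6443
  remote_ip_prefix  = "10.30.0.0/24" # control/LB subnet
  security_group_id = openstack_networking_secgroup_v2.control_sg.id
}

resource "openstack_networking_secgroup_rule_v2" "bastion_ssh" {
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "10.30.0.5/32" # bastion address, illustrative
  security_group_id = openstack_networking_secgroup_v2.control_sg.id
}
```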

Step 7 — backup, restore, and DR playbook

Etcd is your single most important artifact. A practical DR playbook:

  1. Schedule automatic encrypted etcd snapshots every 15 minutes with retention depending on RPO/RTO.
  2. Store encrypted snapshots in your EU S3/Swift bucket and replicate to a secondary EU site if required.
  3. Test restores quarterly: spin a temporary control plane, restore etcd snapshot, validate API objects, then tear down.
  4. Document the full recovery steps and keep a signed copy in a legal team controlled vault.
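Retention (step 1) can be as simple as keeping the newest N encrypted snapshots. A minimal sketch, assuming the *.db.age naming used earlier (the function name is ours):

```shell
# prune_snapshots DIR KEEP: keep the newest KEEP encrypted snapshots
# (*.db.age) in DIR and delete the rest.
prune_snapshots() {
  dir="$1"
  keep="$2"
  # list newest-first; everything after the first $keep entries is removed
  ls -1t "$dir"/*.db.age 2>/dev/null | tail -n +"$((keep + 1))" | while read -r f; do
    rm -f -- "$f"
  done
}

# example: prune_snapshots /var/backups 5
```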

Operational tips & automation

  • Use Terraform outputs for control plane IPs and LB VIP and inject them into your kubeadm config via templates — this keeps infra → cluster reproducible.
  • Pin and vendor images for container runtime and kube components in a private registry inside the EU.
  • Use GitOps (Argo CD/Flux) for cluster configuration; keep the Git server within your sovereign perimeter.
  • Automate certificate rotation with Vault PKI and integrate with kubelet/certificate rotation to avoid manual intervention.
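The first tip might look like the following sketch, rendering the kubeadm config from a template with Terraform's templatefile function (the template file name and the local_file approach are choices made here, not part of the guide):

```hcl
output "api_vip" {
  value = openstack_lb_loadbalancer_v2.apilb.vip_address
}

# render the kubeadm config from a template, injecting the LB VIP
resource "local_file" "kubeadm_config" {
  filename = "${path.module}/kubeadm-config.yaml"
  content = templatefile("${path.module}/kubeadm-config.yaml.tpl", {
    control_plane_endpoint = "${openstack_lb_loadbalancer_v2.apilb.vip_address}:6443"
  })
}
```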

Testing & compliance checks

Before declaring the cluster sovereign‑compliant, run:

  • Network egress tests to ensure no traffic leaves the EU boundary.
  • Policy audits (OpenSCAP, kube-bench for CIS benchmarks) and custom policy checks (OPA/Gatekeeper) for legal flags.
  • Log tamper tests: ensure audit logs are immutable and shipped to your SIEM with integrity checks.
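For the OPA/Gatekeeper check, a ConstraintTemplate that rejects container images from outside an approved (EU-hosted) registry might look like this sketch (the template name and parameter shape are ours; pass your registry prefixes in the companion Constraint):

```yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8sallowedregistries
spec:
  crd:
    spec:
      names:
        kind: K8sAllowedRegistries
      validation:
        openAPIV3Schema:
          type: object
          properties:
            registries:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sallowedregistries

        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          not allowed(container.image)
          msg := sprintf("image %v is not from an approved EU registry", [container.image])
        }

        allowed(image) {
          startswith(image, input.parameters.registries[_])
        }
```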

Emerging patterns

Two patterns you should watch in 2026:

  • Confidential Computing integration: Look to hardware features like TEE/SEV/TDX to protect control plane secrets in memory. Many organizations are piloting confidential compute for key components.
  • Federated sovereignty: For multi‑EU jurisdictions, adopt a federated control plane pattern where each country node holds full control of its region while a central policy plane enforces cross‑site governance.

Security checklist (quick)

  • Encrypted volumes with customer keys (Vault/HSM)
  • No floating IPs; API only via LB VIP
  • Isolated project with RBAC and audit-only roles
  • Short‑lived admin credentials and MFA
  • Automated, encrypted etcd snapshots stored inside EU
  • Signed images and supply chain checks

“Sovereignty is not just about geography — it’s about control, visibility and the ability to reproduce your platform on your terms.”

Common pitfalls

  • Relying on provider KMS keys that can be exported outside the EU — insist on customer‑held keys.
  • Overlooking metadata service access from inside VMs — harden the metadata service and limit roles.
  • Not testing restores — backups without rehearsed restores are not a backup.

Example restore snippet

# restore sequence (simplified)
# 1. Create new control plane VMs and LB but do not initialize
# 2. Fetch latest snapshot and decrypt
age -d -i /root/agekey.txt /tmp/etcd-latest.db.age > /tmp/etcd-latest.db

# 3. On a restored etcd node
ETCDCTL_API=3 etcdctl snapshot restore /tmp/etcd-latest.db --data-dir /var/lib/etcd

# 4. Start etcd with the restored data, verify endpoints and bring up kubeadm control plane

Wrap up & takeaways

Deploying a legally‑isolated, EU‑sovereign Kubernetes control plane on OpenStack with Terraform gives you reproducible control, auditable boundaries, and predictable costs compared with third‑party sovereign offerings. Core principles to keep in mind:

  • Project boundary — a dedicated project and tenant is your primary legal isolation.
  • Customer keys — keep KMS and keys under your control inside the EU.
  • Automation — Terraform + cloud‑init + kubeadm ensures repeatable builds and documented restore paths.
  • Audit & test — run continuous policy checks and DR rehearsals.

Next steps (actionable)

  1. Clone this repo template and update provider variables for your OpenStack tenant.
  2. Provision a test control plane in an isolated VLAN and practice a full etcd restore.
  3. Integrate Vault for key management inside the EU and audit the key policy.

Want a ready‑to‑run starter kit? We maintain a production‑grade Terraform + kubeadm reference repo tuned for colocation and OpenStack. It includes CI pipelines for testing restores and an enterprise checklist aligned to 2026 EU compliance expectations.

Call to action

If you’re evaluating sovereign deployments, request a trial of our OpenStack Terraform starter kit or schedule a 45‑minute technical review. We’ll review your legal boundary, provide a tailored architecture, and run a restore test with your backup policy — all within EU controls.
