Deploy OCI applications from Kubernetes Pod YAML manifests.
sdme can create containers from Kubernetes Pod YAML manifests without requiring Kubernetes, Docker, Podman, or any OCI runtime. Everything is wired through sdme and systemd: OCI images are pulled directly from registries and run as systemd services inside nspawn containers.
Kubernetes Pod YAML describes one or more OCI images to run as isolated services (environment variables, volumes, probes) in a single file that sdme parses and deploys. This is not the same as the sdme pod networking feature.
See also the architecture documentation for implementation details.
sdme kube apply reads a Pod (or Deployment) YAML, pulls the
specified OCI images, builds a combined rootfs on a base OS, and
starts a single nspawn container with one systemd service per OCI
image. All services in the pod share localhost, just like in
Kubernetes.
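Since kube apply also accepts Deployment YAML, the same service can be described as a Deployment. This is a sketch assuming sdme reads the standard spec.template pod template; the replicas and selector fields follow the usual Kubernetes Deployment shape and may be handled differently by sdme, which starts a single nspawn container:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
      - name: nginx
        image: nginx
```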
Create a file called nginx-pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: my-nginx
spec:
  containers:
  - name: nginx
    image: nginx
The base rootfs can be any supported distribution. Import one if you haven't already (Ubuntu for example):
sudo sdme fs import ubuntu docker.io/ubuntu
Deploy it:
sudo sdme kube apply -f nginx-pod.yaml --base-fs ubuntu --hardened --network-zone=kube
This pulls the nginx image, builds a rootfs called kube-my-nginx
on top of the ubuntu base, starts the container with user namespace
isolation and its own network, and drops you into a shell.
Inside the container, you can verify the nginx service with standard systemd commands:
systemctl status sdme-oci-nginx.service
journalctl -u sdme-oci-nginx.service
Exit the shell with Ctrl+D; the container keeps running. From
the host, you can still check the logs:
sudo sdme logs my-nginx --oci nginx
Short image names like redis or nginx are resolved using the default_kube_registry config (default: docker.io). Fully qualified names like quay.io/nginx/nginx-unprivileged are used as-is. To use a different default registry:
sudo sdme config set default_kube_registry registry.example.com
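The resolution rule above can be illustrated with a short Python sketch. This is hypothetical: sdme's exact heuristic is not documented here, and the sketch follows the common container-tooling convention where a name is fully qualified when its first path component looks like a registry host (contains a dot or colon, or is `localhost`):

```python
def resolve_image(name: str, default_registry: str = "docker.io") -> str:
    """Prefix a short image name with the default registry.

    Hypothetical sketch of the documented behavior: fully qualified
    names (first path component looks like a registry host) are
    used as-is; anything else gets the default registry prepended.
    """
    first, _, rest = name.partition("/")
    if rest and ("." in first or ":" in first or first == "localhost"):
        return name  # already fully qualified, used as-is
    return f"{default_registry}/{name}"

print(resolve_image("nginx"))                             # docker.io/nginx
print(resolve_image("quay.io/nginx/nginx-unprivileged"))  # unchanged
```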
All containers on the same network zone can reach each other by hostname. You can use any supported distribution here (Arch Linux for example):
sudo sdme fs import archlinux docker.io/lopsided/archlinux
Create a regular container on the kube zone and curl the nginx
pod:
sudo sdme new myclient -r archlinux --hardened --network-zone=kube
Inside the client container:
curl http://my-nginx
This works because --network-zone uses LLMNR for automatic
hostname discovery between containers in the same zone. Any sdme
container (kube or regular) can join the zone and communicate
with the others.
This example deploys PostgreSQL on a Fedora base and shows how to configure it using environment variables, secrets, and configmaps, the same way you would in Kubernetes.
Import Fedora if you haven't already:
sudo sdme fs import fedora quay.io/fedora/fedora
The simplest approach puts the password directly in the YAML. Save this as db-pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: my-db
spec:
  containers:
  - name: postgres
    image: postgres
    env:
    - name: POSTGRES_PASSWORD
      value: "secret"
Here we use kube create instead of kube apply to build the pod
without starting it or dropping into a shell, then start it
separately:
sudo sdme kube create -f db-pod.yaml --base-fs fedora --hardened --network-zone=kube
sudo sdme start my-db
sudo sdme logs my-db --oci postgres
This works, but the password is visible in the YAML file.
Create a secret to keep the password out of the YAML:
sudo sdme kube secret create db-credentials --from-literal=password=secret
Then reference it in the pod:
apiVersion: v1
kind: Pod
metadata:
  name: my-db
spec:
  containers:
  - name: postgres
    image: postgres
    env:
    - name: POSTGRES_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: password
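If a secret holds several keys, envFrom (listed among the supported fields) can inject them all at once. In standard Kubernetes semantics each key becomes the variable name, so this hypothetical variant assumes a secret whose keys are named after the variables, e.g. created with sudo sdme kube secret create db-env --from-literal=POSTGRES_PASSWORD=secret:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-db
spec:
  containers:
  - name: postgres
    image: postgres
    envFrom:
    - secretRef:
        name: db-env   # hypothetical secret; keys double as variable names
```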
Configuration that isn't sensitive can go in a configmap. For example, to set the default database name:
sudo sdme kube configmap create db-config --from-literal=dbname=myapp
Then reference both the secret and the configmap in the pod:
apiVersion: v1
kind: Pod
metadata:
  name: my-db
spec:
  containers:
  - name: postgres
    image: postgres
    env:
    - name: POSTGRES_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: password
    - name: POSTGRES_DB
      valueFrom:
        configMapKeyRef:
          name: db-config
          key: dbname
Since the database is on the kube zone, any other container on
the same zone can reach it. Create a Debian client container:
sudo sdme fs import debian docker.io/debian:stable
sudo sdme new dbclient -r debian --hardened --network-zone=kube
Inside the client container, install the PostgreSQL client and connect by hostname:
apt-get update && apt-get install -y postgresql-client
PGPASSWORD=secret psql -h my-db -U postgres -d myapp
When you no longer need them, list and remove secrets and configmaps:
sudo sdme kube secret ls
sudo sdme kube secret rm db-credentials
sudo sdme kube configmap ls
sudo sdme kube configmap rm db-config
sdme kube delete stops and removes both the container and its
generated rootfs:
sudo sdme kube delete my-nginx
To avoid repeating --base-fs on every kube command:
sudo sdme config set default_base_fs ubuntu
Then --base-fs can be omitted:
sudo sdme kube apply -f nginx-pod.yaml --hardened --network-zone=kube
All examples in this tutorial use --network-zone=kube, which
gives each container its own network namespace with automatic DNS
between containers in the same zone. Containers are reachable by
IP from the host (use sdme ps to find the address).
The Kubernetes hostNetwork: true field is supported and keeps the
container on the host network.
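As a sketch, a pod kept on the host network would set the standard Kubernetes field in its spec (hypothetical pod name; everything else matches the earlier nginx example):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-nginx-host
spec:
  hostNetwork: true   # stay on the host network instead of a zone
  containers:
  - name: nginx
    image: nginx
```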
See the network configuration tutorial for details on each mode.
sdme supports a subset of the Kubernetes Pod spec:
- Environment variables (env, envFrom)
- Secrets and configmaps (sdme kube secret, sdme kube configmap)
- Networking (hostNetwork, --network-veth, --network-zone, --network-bridge, --port)
For the full list of supported fields, see sdme kube apply --help.