Replacing Kubernetes with systemd (2024)

Lightweight alternatives to Kubernetes for small deployments

  • Many commenters agree Kubernetes is overkill for single-node or tiny hobby workloads; the benefits don’t justify its RAM/CPU overhead and operational complexity.
  • Common “simpler stack” patterns:
    • systemd units + native packages (deb/rpm) on a VM, sometimes managed by Ansible (a minimal unit is sketched after this list).
    • Docker or Podman + docker‑compose, often fronted by nginx, Traefik, or Caddy.
    • Git-based deployment scripts (ssh/scp or Ansible) for idempotent updates.
  • Several tools aim to bring orchestration-like ergonomics without full K8s:
    • Podman + systemd (or Quadlet).
    • CapRover, Coolify, Dokploy, Harbormaster, Kamal.
    • Nomad (no longer open source), Docker Swarm (widely viewed as abandonware but still used), Portainer.
    • Newer projects like skate and uncloud aim to provide a multi-host or K8s‑compatible UX on simpler backends.
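
As a sketch of the first pattern, a natively packaged binary often needs nothing more than one unit file. Everything below is a placeholder: the hypothetical myapp binary, its --listen flag, the port, and the paths.

    # /etc/systemd/system/myapp.service (minimal sketch for a natively packaged app)
    [Unit]
    Description=My small web app
    After=network-online.target
    Wants=network-online.target

    [Service]
    # hypothetical binary installed from a deb/rpm
    ExecStart=/usr/bin/myapp --listen 127.0.0.1:8080
    Restart=on-failure
    # run as an unprivileged, dynamically allocated user
    DynamicUser=yes

    [Install]
    WantedBy=multi-user.target

Enable it with systemctl enable --now myapp; a reverse proxy such as nginx, Traefik, or Caddy in front of it handles TLS, matching the compose-style pattern above.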

Podman, Quadlet, and systemd-based container management

  • Podman + systemd is widely used for homelabs and small servers; Quadlet files are praised as “set and forget” container units that live directly in systemd (see the sketch after this list).
  • Tools like podlet convert compose files or Kubernetes manifests into Quadlet units; some run Kubernetes YAML directly with podman kube play to keep a familiar API.
  • Debate over rootless vs rootful containers:
    • Rootless Podman has quirks (e.g., losing the real client IP behind userspace port forwarding, needing user lingering so services survive logout).
    • Some recommend rootful Podman with userns=auto as a simpler but still secure compromise.
  • Systemd’s User=/DynamicUser= don’t integrate cleanly with Podman yet; workarounds look messy.
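
A minimal Quadlet sketch, assuming rootful Podman with an automatic user namespace as recommended above; the image, port, and file name are placeholders.

    # /etc/containers/systemd/whoami.container (Quadlet unit picked up by systemd)
    [Unit]
    Description=Example container managed as a systemd unit

    [Container]
    Image=docker.io/traefik/whoami:latest
    PublishPort=127.0.0.1:8080:80
    # rootful Podman, but each container gets its own user namespace
    UserNS=auto

    [Service]
    Restart=always

    [Install]
    WantedBy=multi-user.target

After systemctl daemon-reload, Quadlet generates a whoami.service that starts, restarts, and logs like any other unit.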

Systemd capabilities and controversies

  • Supporters highlight systemd’s breadth: service units, timers (as a better cron), mount units (vs fstab), nspawn/vmspawn containers, homed, run0, and powerful sandboxing (a timer-plus-service sketch follows this list).
  • Critics argue it violates the “small tools” philosophy, is too complex for PID 1, and forces opaque behaviors (network bring‑up, mounts, journald).
  • Journald in particular is criticized for its performance, binary log format, and awkward journalctl UX compared to grepping plain-text logs.
  • There’s lingering resentment over early breakage and perceived maintainer attitude, though many concede systemd+units are now hard to avoid and often useful.
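
To illustrate the timers-as-cron and sandboxing points, here is a minimal sketch of a hypothetical nightly backup job; the script path and schedule are stand-ins.

    # /etc/systemd/system/backup.timer (the schedule)
    [Unit]
    Description=Nightly backup

    [Timer]
    OnCalendar=*-*-* 02:00:00
    # catch up after boot if the machine was off at 02:00
    Persistent=true

    [Install]
    WantedBy=timers.target

    # /etc/systemd/system/backup.service (the job, with sandboxing applied)
    [Unit]
    Description=Nightly backup job

    [Service]
    Type=oneshot
    # hypothetical backup script
    ExecStart=/usr/local/bin/run-backup
    DynamicUser=yes
    ProtectSystem=strict
    PrivateTmp=yes
    # writable state lands in /var/lib/backups, owned by the dynamic user
    StateDirectory=backups

systemctl list-timers shows the next run, and journalctl -u backup.service holds the output cron would have mailed.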

Kubernetes suitability, variants, and homelab experiences

  • Some want “the Kubernetes API on a single cheap VPS” and feel stuck reinventing Deployments, Ingresses, and CronJobs with ad‑hoc scripts and compose files.
  • Others report success with k3s/k0s/microk8s on small or homelab clusters, emphasizing:
    • Good experience once set up, especially with plain YAML and minimal add‑ons.
    • But kubelet plus etcd/SQLite still impose noticeable memory and I/O overhead, and the complexity isn’t worth it for a single box.
  • Philosophical split:
    • One side: “Use K8s everywhere; you’ll eventually grow into it.”
    • Other side: “Use the simplest thing that works; most apps don’t truly need K8s-level coordination.”

Redundancy, updates, and state management

  • systemd+Podman/compose setups often handle updates via Ansible or scripts that:
    • Stop services, snapshot volumes (e.g., with btrfs), deploy the new config, run health checks, and roll back automatically, persistent volumes included (one way to hook the snapshot step into the unit itself is sketched after this list).
  • With K8s, rolling updates and zero‑downtime deploys are native, but equivalent volume‑level rollback requires CSI snapshot tooling and custom automation.
  • For redundancy without schedulers, some prefer simple replication across machines rather than dynamic rescheduling.
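
Such flows usually live in an external deploy script or Ansible role, but the snapshot step can also be attached to the unit itself; a minimal sketch using a drop-in and a hypothetical helper script:

    # /etc/systemd/system/myapp.service.d/snapshot.conf (drop-in for an existing unit)
    [Service]
    # take a btrfs snapshot of the data volume before each (re)start,
    # so a failed deploy can be rolled back together with its data;
    # snapshot-myapp is a hypothetical helper script
    ExecStartPre=/usr/local/bin/snapshot-myapp

The health check and rollback decision still belong to the deploy script; the drop-in only guarantees that a snapshot exists before the new version starts.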

Cloud and infrastructure cost considerations

  • A recurring pain point is fitting orchestration into $5–10/month VPS constraints (1 vCPU, 2 GB RAM).
  • Suggestions:
    • Use cheaper/denser providers (Hetzner, Netcup, Contabo) or Oracle Cloud’s generous ARM free tier, though Oracle’s free accounts come with horror stories of sudden lockouts.
  • Several argue that the time spent chasing ultra‑cheap hosting and wrangling heavy tooling costs more than simply paying a bit more or simplifying the stack.

Historical and experimental orchestration tools

  • CoreOS fleet is remembered as an early “distributed systemd” precursor to K8s; some still experiment with it.
  • BlueChi (formerly Hirte) offers multi‑node systemd control; Talos Linux and Aurae explored “K8s‑native OS / unified node runtime” ideas.
  • General sentiment: many attempts at “distributed systemd” exist, but for now people mostly pick either full Kubernetes or single‑node systemd/Podman stacks.