Why Self-Host?

Backups, Risk, and “Bus Factor”

  • Many see off-site, encrypted, regularly tested backups as the true hard part of self‑hosting.
  • Approaches include restic/kopia to cloud storage, ZFS send/receive replication, NAS‑to‑NAS sync, and simple rsync of Docker volumes.
  • Some treat self‑hosting as a secondary backup to cloud services; others make self‑hosting primary and cloud “cold storage.”
  • Concern extends beyond data: what happens if the operator is unavailable and no one else can maintain or restore the system?
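The restic‑to‑cloud approach mentioned above might look like the sketch below. This is a minimal illustration, not a recommendation from the thread: the bucket name, password file, and backup paths are all placeholders, and it assumes restic is installed with S3 credentials already exported.

```shell
# Hypothetical repository and paths -- adjust everything to taste.
export RESTIC_REPOSITORY="s3:s3.amazonaws.com/my-backup-bucket"
export RESTIC_PASSWORD_FILE="$HOME/.restic-password"

restic init                                # one-time: create the encrypted repo
restic backup /srv/docker-volumes /home    # encrypted, deduplicated snapshot
restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune
restic check                               # verify repository integrity
restic restore latest --target /tmp/restore-test   # the "regularly tested" part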

Defining Self‑Hosting

  • Debate over scope: strict view requires owning/controlling the machine; looser view includes VPSs and “just not SaaS.”
  • Some argue a VPS is still “someone else’s computer” (the provider retains hypervisor access and physical control); others hold that software control and portability are what matter.
  • Distinction drawn between “self‑hosting” (you manage the software) and “homelab” (you also own/manage hardware).

Email as a Special Case

  • Email is widely acknowledged as the hardest thing to self‑host reliably, due to spam/IP reputation, implicit whitelisting among big providers, and complex DNS authentication (SPF/DKIM/DMARC).
  • Common compromise: self‑host receiving/archiving, outsource sending to Gmail/SES/SendGrid or similar.
  • Some report long‑term success with fully self‑hosted mail; others describe significant deliverability pain and eventually give up.
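The SPF/DKIM/DMARC records at issue are just DNS TXT entries, and can be inspected with `dig`. In this sketch, `example.com` stands in for your domain and the DKIM selector `mail` is a placeholder (selectors vary by mail setup).

```shell
# Inspect the three DNS auth records deliverability depends on.
dig +short TXT example.com                   # SPF, e.g. "v=spf1 include:... ~all"
dig +short TXT mail._domainkey.example.com   # DKIM public key (selector varies)
dig +short TXT _dmarc.example.com            # DMARC policy, e.g. "v=DMARC1; p=quarantine"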

Motivations Beyond Privacy/Sovereignty

  • Cost: for many workloads, a single reasonably powerful box (or cheap VPS) beats cumulative SaaS bills.
  • Performance: LAN speeds and local CI runners can be dramatically faster than cloud.
  • Customization and stability: control over upgrades, avoiding product shutdowns, and tailoring stacks.
  • Learning/professional development: being responsible for “real” infra is seen as uniquely educational.

Hardware and Deployment Patterns

  • Wide spectrum: old PCs, NUCs, SBCs (Raspberry Pi, ODroid), NAS appliances, up to Threadripper workstations and colo boxes.
  • Containers + a reverse proxy (often Docker + Caddy/nginx) are the dominant pattern; some use Proxmox, Kubernetes, or NixOS/FreeBSD.
  • Opinions differ sharply on Pi‑class hardware (from “great starter” to “exercise in frustration”).
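The dominant containers‑plus‑reverse‑proxy pattern can be sketched with plain `docker` commands and Caddy's built‑in `reverse-proxy` subcommand. The image name, domain, and port below are placeholders, and this assumes DNS already points at the host so Caddy can obtain a TLS certificate.

```shell
# Common pattern: app containers on a private network, proxy in front.
docker network create web

# The app, reachable only on the internal Docker network (no -p flag).
docker run -d --name myapp --network web my/app-image

# Caddy terminates TLS on 80/443 and proxies to the app by container name.
docker run -d --name caddy --network web -p 80:80 -p 443:443 \
  caddy caddy reverse-proxy --from apps.example.com --to myapp:8080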

Security, Access, and Exposure

  • Mesh VPNs (e.g., Tailscale‑like tools) and tunnels are seen as a major enabler: expose little or nothing directly to the internet.
  • Some rely on cloud WAF/CDN fronts; others refuse because it reintroduces a central intermediary.
  • Attitudes to risk vary: a few minimize OS‑level hardening needs; others strongly warn about silent compromise and botnets.
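One shape the “expose little or nothing” stance takes is joining the host to a mesh VPN and dropping everything else at the firewall. A rough sketch, assuming Tailscale and ufw are installed (the interface name `tailscale0` is Tailscale's default):

```shell
# Reach services only over the mesh VPN; refuse everything from the WAN.
sudo tailscale up                           # join the tailnet
sudo ufw default deny incoming              # drop unsolicited inbound traffic
sudo ufw allow in on tailscale0             # accept traffic only via the VPN
sudo ufw enable
```

With this in place, nothing listens on the public internet at all; the trade‑off is that every client device must also be on the tailnet.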

Complexity and Accessibility

  • Recurrent theme: running a reliable service (backups, upgrades, monitoring) is a different commitment than a fun weekend project.
  • There’s nostalgia for “next‑next‑finish” installers and frustration that modern self‑hosting often demands Docker, TLS, VPNs, and routing knowledge.
  • Package formats and “one‑click” platforms (snaps, specialized distros, Coolify, Yunohost, etc.) are cited as partial answers, but not yet mass‑friendly.

What People Actually Self‑Host

  • Commonly mentioned: photos (Immich), media servers, RSS readers, password managers (with caveats), file sync (Nextcloud, Syncthing), notes, analytics, small business stacks, and even full SaaS infrastructure.
  • Many avoid self‑hosting truly critical services (email, family photos) unless they have strong backup and failover confidence.

Philosophical and Social Threads

  • Some tie self‑hosting to free‑software ideals and resistance to cloud lock‑in; others see it as overkill versus “just having fewer digital dependencies.”
  • Idea of one technical person running services for family/friends as a way to build both digital sovereignty and real‑world community appears repeatedly.