Why did containers happen?

Packaging, Dependencies, and “It Works on My Machine”

  • Many comments frame containers as a workaround for Linux packaging and dependency hell, especially for Python, Ruby, Node, and C/C++-backed libraries.
  • Traditional distro repos and global system packages are seen as fragile: one package manager for “everything” (system + apps + dev deps) makes conflicts and upgrades risky.
  • Containers let developers ship all runtime deps together, sidestepping distro maintainers, glibc quirks, and multiple incompatible versions on one host.
  • Several argue Docker’s real innovation was not isolation but the image format plus the Dockerfile: a reproducible, shareable artifact that fixes “it works on my machine” by “shipping the machine” (a minimal example follows this list).
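
To make “shipping the machine” concrete, here is a minimal, illustrative Dockerfile for a hypothetical Python web app (app.py and requirements.txt are stand-in names, not from the thread). Copying requirements.txt before the rest of the source is the idiom that lets the dependency layer be cached across rebuilds, the layering benefit discussed under “What Docker Added” below:

    # Pinned base image rather than whatever the host happens to have
    FROM python:3.12-slim
    WORKDIR /app
    # Copying the manifest first means the dependency layer stays cached
    # until requirements.txt itself changes
    COPY requirements.txt .
    RUN pip install -r requirements.txt
    # Edits to app code invalidate only this layer and the ones after it
    COPY . .
    # Hypothetical entrypoint
    CMD ["python", "app.py"]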

Resource Utilization and OS-Level Isolation

  • Another origin story is cost efficiency at scale: cgroups and namespaces arose to bin-pack heterogeneous workloads onto commodity hardware (e.g., search/ads-style workloads).
  • Containers are far lighter than full VMs, so many workloads fit on one host while sharing a single kernel; the sketch after this list shows how thin the “container” abstraction is at the kernel level.
  • Commenters trace a long lineage: mainframe VMs → HP-UX Vault, FreeBSD jails, Solaris zones, OpenVZ/Virtuozzo, Linux-VServer, LXC; Docker mainly popularized what already existed.
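
As a rough illustration (not from the thread): on Linux, the namespace half of the story is a couple of flags passed to clone(2). This minimal Go sketch starts a shell in fresh UTS, PID, and mount namespaces; it assumes Linux and sufficient privileges (root or CAP_SYS_ADMIN), and it sets up no cgroup limits and no filesystem image:

    package main

    // Minimal namespace demo: a "container" here is just an ordinary
    // process given its own hostname, PID, and mount namespaces.
    import (
        "os"
        "os/exec"
        "syscall"
    )

    func main() {
        cmd := exec.Command("/bin/sh")
        cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
        cmd.SysProcAttr = &syscall.SysProcAttr{
            Cloneflags: syscall.CLONE_NEWUTS | // own hostname
                syscall.CLONE_NEWPID | // own PIDs (the shell sees itself as PID 1)
                syscall.CLONE_NEWNS, // own mount table
        }
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }

Everything heavier (images, layers, networking) is userspace tooling built on primitives like these, plus cgroups for resource limits.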

What Docker Added

  • Key contributions called out:
    • Layered images over overlay filesystems, so unchanged layers are cached and rebuilds stay fast.
    • A simple, limited DSL (Dockerfile) for building images.
    • A public registry (Docker Hub) and later similar registries.
    • Docker Compose for multi-service dev/test setups and easy throwaway local databases (a minimal compose file follows this list).
  • This combination made containers accessible to solo devs and SMBs, not just big infra teams.
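
For flavor, a minimal, hypothetical compose file in that spirit: one app container built from a local Dockerfile plus a throwaway Postgres for development (service names, the port, and the password are illustrative only):

    services:
      web:
        build: .                # built from the project's Dockerfile
        ports:
          - "8000:8000"         # host:container port mapping
        depends_on:
          - db
      db:
        image: postgres:16
        environment:
          POSTGRES_PASSWORD: example    # dev-only credential
        volumes:
          - pgdata:/var/lib/postgresql/data
    volumes:
      pgdata:                   # named volume, so data survives container restarts

A single “docker compose up” then brings up the app and its database together, which is exactly the solo-dev convenience described above.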

Security, Alternatives, and Philosophical Debates

  • Disagreement on intent: some see containers as “sandboxing”; others see them primarily as namespacing/virtualization, not a strong security boundary.
  • Container escapes amount to kernel 0-days, so providers running serious multi-tenant workloads still rely on VMs or additional isolation layers.
  • Long subthreads debate whether Unix’s security model is “fundamentally poor” or “fundamentally good,” capability-based designs (e.g., seL4), unikernels, and whether a small, formally verified microkernel could eventually displace Linux for cloud workloads.

Complexity, Critiques, and Evolution

  • Several criticize the modern container/k8s ecosystem as overcomplicated: YAML sprawl, container networking/logging pain, and orchestration overhead just to run simple services.
  • Others emphasize the upside: explicitly declared ports, volumes, and config, plus immutable, versioned images, make deployment, rollback, and migration vastly easier (see the sketch at the end of this section).
  • Overall consensus: containers “happened” where traditional Unix packaging, global state, and VM-heavy workflows failed to keep up with modern, fast-moving, dependency-rich software.
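
As one concrete (and, per the critics, verbose) illustration of that declarative model, here is a minimal Kubernetes Deployment; the registry and tag are hypothetical stand-ins for an immutable, versioned image:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: web
              image: registry.example.com/web:1.4.2   # immutable, versioned image
              ports:
                - containerPort: 8000                 # explicitly declared port

Rolling back is then a matter of pointing the Deployment at the previous tag (or running “kubectl rollout undo”), rather than unpicking mutations on a long-lived host.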