Using SQLite as storage for web server static content

High-level reaction

  • Many find the idea interesting and fun to experiment with, but see it as niche.
  • Strong split: some like SQLite as a “filesystem abstraction”; many think ordinary filesystems plus standard deployment patterns are simpler and more robust.

Atomic updates, versioning, and rollbacks

  • SQLite transactions for updating many files at once are praised for:
    • Atomic deployments across multiple apps.
    • Easy rollbacks by switching versions in the DB.
  • Several argue the same effect is trivial with:
    • Symlink or directory swap, temporary files + rename, tar archives.
    • Git worktrees or filesystem snapshots (ZFS/Btrfs) with dedup and compression.
  • Multiple comments note atomic server-side swaps do not solve client-side version skew:
    • Browsers fetch assets via separate requests, so a client can observe a mix of old and new resources mid-deploy.
    • Content-hash/versioned asset URLs and keeping old versions available are seen as the real solution.
  • OP clarifies the target is internal, multi-app, blue-green-style deployments where DB-centric versioning simplifies management.
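
The transactional-deploy idea praised above can be sketched roughly as follows. This is a minimal illustration, not the project's actual code; the `assets`/`current_version` schema and function names are hypothetical.

```python
import sqlite3

# Hypothetical schema: one row per (version, path), plus a single-row
# pointer table naming the live version.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE assets (
        version INTEGER NOT NULL,
        path    TEXT    NOT NULL,
        body    BLOB    NOT NULL,
        PRIMARY KEY (version, path)
    );
    CREATE TABLE current_version (
        id      INTEGER PRIMARY KEY CHECK (id = 1),
        version INTEGER NOT NULL
    );
    INSERT INTO current_version VALUES (1, 0);
""")

def deploy(conn, version, files):
    """Insert every file and flip the live pointer in one transaction."""
    with conn:  # BEGIN ... COMMIT; rolls back automatically on exception
        conn.executemany(
            "INSERT INTO assets (version, path, body) VALUES (?, ?, ?)",
            [(version, path, body) for path, body in files.items()])
        conn.execute("UPDATE current_version SET version = ? WHERE id = 1",
                     (version,))

def rollback_to(conn, version):
    """Rollback is just pointing back at an older version."""
    with conn:
        conn.execute("UPDATE current_version SET version = ? WHERE id = 1",
                     (version,))

def serve(conn, path):
    row = conn.execute("""
        SELECT body FROM assets
        WHERE version = (SELECT version FROM current_version) AND path = ?
    """, (path,)).fetchone()
    return row[0] if row else None

deploy(conn, 1, {"/index.html": b"v1"})
deploy(conn, 2, {"/index.html": b"v2"})
rollback_to(conn, 1)   # serve(conn, "/index.html") now returns the v1 body
```

Note that a symlink swap achieves the same server-side atomicity for a single directory tree; the DB version buys coordinated switching across many apps in one commit.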

Performance and scalability

  • SQLite proponents cite:
    • Fewer syscalls, user-space caching, good read concurrency, WAL mode.
    • Prior SQLite benchmarks claiming it can be faster than the filesystem for lots of small files.
  • Skeptics respond:
    • Modern web servers use sendfile, io_uring, DMA, etc., which likely outperform DB-based serving for large static sites.
    • Independent benchmarks in the thread show:
      • Similar performance at low throughput.
      • SQLite up to ~2.3× slower at high throughput for static file serving.
    • Concerns about write locks during large blob transactions, though WAL mode mitigates this.
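
The read-oriented settings proponents cite can be sketched like this; the pragma values are illustrative tuning choices, not the project's actual configuration.

```python
import os
import sqlite3
import tempfile

# WAL mode requires a file-backed database (it does not apply to :memory:).
db_path = os.path.join(tempfile.mkdtemp(), "static.db")
conn = sqlite3.connect(db_path)

# WAL lets many readers proceed while a single writer appends to the log.
mode = conn.execute("PRAGMA journal_mode=WAL;").fetchone()[0]

# NORMAL is considered safe under WAL and cuts fsync traffic.
conn.execute("PRAGMA synchronous=NORMAL;")

# Memory-map up to 256 MiB of the file: reads become page-cache hits
# rather than read() syscalls (one of the "fewer syscalls" arguments).
conn.execute("PRAGMA mmap_size=268435456;")

# Negative cache_size is KiB: keep ~64 MiB of pages in user space.
conn.execute("PRAGMA cache_size=-65536;")
```

This captures the pro-SQLite mechanics (user-space caching, mmap, WAL read concurrency); whether it beats sendfile/io_uring paths is exactly what the thread's benchmarks dispute.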

Deduplication, metadata, and compression

  • Pro-DB arguments:
    • Easy to store hashes, metadata, and multiple compressed variants (Brotli/gzip/plain) in tables.
    • Dedup across versions and apps via content-hash primary keys.
    • SQL queries (sometimes with type-safe query builders) give powerful ways to search and manipulate “files”.
  • Counterpoints:
    • Filesystems (ZFS/Btrfs, hash-based stores, hard links) can also provide dedup, compression, and snapshots.
    • Deduplication logic is risky: bugs in reference counting or cleanup could delete assets used by many apps.
    • Custom compression inside SQLite loses benefits of filesystem-level tools (e.g., transparent compression and search).
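
A content-hash dedup scheme of the kind described might look like the sketch below. The `blobs`/`paths` schema is hypothetical, and it deliberately omits the reference-counting cleanup that the counterpoints flag as the risky part.

```python
import gzip
import hashlib
import sqlite3

# Hypothetical schema: blobs are keyed by content hash, so identical
# content across apps and versions is stored once; paths reference blobs.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE blobs (
        sha256  TEXT PRIMARY KEY,   -- content hash is the identity
        plain   BLOB NOT NULL,
        gzipped BLOB                -- precompressed variant, when smaller
    );
    CREATE TABLE paths (
        app    TEXT NOT NULL,
        path   TEXT NOT NULL,
        sha256 TEXT NOT NULL REFERENCES blobs(sha256),
        PRIMARY KEY (app, path)
    );
""")

def put(conn, app, path, body):
    digest = hashlib.sha256(body).hexdigest()
    gz = gzip.compress(body)
    with conn:
        # Duplicate content is a no-op insert: that is the dedup.
        conn.execute(
            "INSERT OR IGNORE INTO blobs VALUES (?, ?, ?)",
            (digest, body, gz if len(gz) < len(body) else None))
        conn.execute(
            "INSERT OR REPLACE INTO paths VALUES (?, ?, ?)",
            (app, path, digest))

shared = b"<svg/>" * 100
put(conn, "app1", "/logo.svg", shared)
put(conn, "app2", "/logo.svg", shared)  # second path, same single blob row
```

Deleting unused blobs safely (nothing may still reference them) is the cleanup logic the skeptics worry about.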

Operational concerns, backups, and portability

  • Backup story is contested:
    • Some say SQLite is easier to snapshot and replicate (e.g., with streaming tools).
    • Others note that incremental backups and compaction are more complex than rsync/tar of plain files.
  • SRE-style objections:
    • Harder to inspect “what the server is serving” compared to browsing a directory tree.
    • Single DB file as potential single point of failure; dedup layer as another critical complexity.
  • Portability is cited as a reason for SQLite:
    • Same approach works on Linux/macOS/Windows without relying on advanced FS features.
  • For HA/multi-node, sharing a single SQLite file is problematic; the stated plan is to move to a shared Postgres in that mode.
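
The "easier to snapshot" claim usually rests on SQLite's online backup API, which copies a live database to another connection without blocking readers. A minimal sketch using Python's stdlib binding (database names here are illustrative):

```python
import sqlite3

# Source: the live database being served.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE assets (path TEXT PRIMARY KEY, body BLOB)")
src.execute("INSERT INTO assets VALUES ('/index.html', x'00')")
src.commit()

# Destination: in practice a file on backup storage, not :memory:.
dst = sqlite3.connect(":memory:")

# Connection.backup copies the database page by page and yields a
# consistent snapshot even if the source is being written concurrently.
src.backup(dst)
```

This gives consistent point-in-time copies; the counterargument stands that incremental backup and compaction still take more machinery than rsync or tar over plain files.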

Alternative uses and related experiments

  • Multiple examples of SQLite-as-storage beyond this project:
    • Game assets packed into SQLite for fast mobile loads.
    • Map tiles and media (e.g., MBTiles, plugins serving PNGs from DBs).
    • Static-site CMS that edits content in SQLite then emits static files.
    • Scientific computing workloads using read-only SQLite on RAM disks.
  • Overall, many see SQLite-backed storage as great for specialized or local/internal workloads, but not as a general replacement for filesystem-based static hosting.