Many Small Queries Are Efficient in SQLite

Core point: local O(N) queries vs network O(N) queries

  • Many commenters restate the article as: doing O(N) work locally is fine; doing O(N) network round-trips is not.
  • The per-query constant factor “X” (network stack, serialization, RTT) dominates the total cost; it matters more than N itself.
  • Even localhost/loopback still pays socket and kernel overhead; SQLite as an in-process library avoids all of that (see the sketch after this list).
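
To make the constant-factor argument concrete, here is a minimal sketch using Python’s stdlib sqlite3 against an in-memory database; the table, the 10,000 rows, and the 200-lookup loop are illustrative assumptions, not numbers from the article or the thread.

```python
# Minimal sketch of the per-query overhead argument: many small indexed
# lookups against in-process SQLite, where each query is a function call
# rather than a network round trip. Schema and counts are made up.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE item (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO item (id, name) VALUES (?, ?)",
                 [(i, f"item-{i}") for i in range(10_000)])
conn.commit()

# The "N+1" pattern the article defends for local SQLite: 200 tiny SELECTs.
start = time.perf_counter()
rows = [conn.execute("SELECT name FROM item WHERE id = ?", (i,)).fetchone()
        for i in range(200)]
elapsed = time.perf_counter() - start
print(f"200 small queries: {elapsed * 1000:.2f} ms total, "
      f"{elapsed / 200 * 1e6:.1f} µs per query")
```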

Why SQLite isn’t the default for web backends

  • SQLite excels as embedded/local storage but is “not client/server” and has weak write concurrency; that makes it less suitable for multi-node web services.
  • Tooling for remote management, analytics, dashboards, and admin access is seen as clunkier than with Postgres/MySQL; a common workaround is SSH plus the sqlite3 CLI.
  • Some view SQLite as already the de facto standard for embedded use; others note most web stacks still default to Postgres/MySQL, though Rails recently moved toward SQLite by default.

Defaults, typing, and safety

  • Criticism of SQLite’s “insane defaults”: foreign-key enforcement off by default, flexible typing, and STRICT tables not the default.
  • Defense: strong backward-compatibility promises prevent changing defaults without breaking many deployed apps.
  • Mitigations mentioned: always enabling PRAGMA foreign_keys=ON, using STRICT tables, CHECK constraints, and “disciplined” API use (the first three are sketched after this list).
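
A minimal sketch of those mitigations, assuming Python’s stdlib sqlite3 and a made-up two-table schema; STRICT tables require SQLite 3.37 or newer.

```python
# Sketch of the mitigations above: per-connection foreign-key enforcement,
# STRICT tables, and a CHECK constraint. The author/post schema is illustrative.
import sqlite3

conn = sqlite3.connect("app.db")

# Foreign-key enforcement is off by default and must be enabled per connection.
conn.execute("PRAGMA foreign_keys = ON")

conn.executescript("""
CREATE TABLE IF NOT EXISTS author (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
) STRICT;

CREATE TABLE IF NOT EXISTS post (
    id        INTEGER PRIMARY KEY,
    author_id INTEGER NOT NULL REFERENCES author(id),
    title     TEXT NOT NULL CHECK (length(title) > 0)  -- reject empty titles
) STRICT;
""")
conn.commit()
```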

Query patterns: many small vs one big

  • For OLTP-like record retrieval, many small indexed SELECTs against SQLite can be fine or optimal; the overhead per query is tiny.
  • For analytic workloads or large scans, a single complex query is often better; server databases can cache, optimize across queries, and maintain state in memory.
  • Some skepticism: one index scan is often cheaper than 200 index lookups; others counter that for UI-style record fetches, N+1 can be acceptable with SQLite (both patterns are sketched after this list).
  • Concern: designing for 200 local queries/page can make later migration to a networked DB painful, though some think refactoring hot spots is manageable.
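
Both patterns, sketched against an assumed post/comment schema with Python’s stdlib sqlite3; neither function comes from the article, they just illustrate the trade-off in the bullets above.

```python
# Sketch contrasting many small queries vs. one bigger query over the same
# local SQLite database. Schema and function names are illustrative.
import sqlite3

conn = sqlite3.connect("app.db")

def comments_n_plus_one(post_ids):
    """UI-style fetch: one small indexed SELECT per post. Cheap in-process,
    but each iteration becomes a full round trip on a networked database."""
    return {
        pid: conn.execute(
            "SELECT id, body FROM comment WHERE post_id = ?", (pid,)
        ).fetchall()
        for pid in post_ids
    }

def comments_single_query(post_ids):
    """One query the planner can optimize as a whole; the same shape also
    works unchanged against a client/server database."""
    placeholders = ",".join("?" for _ in post_ids)
    rows = conn.execute(
        f"SELECT post_id, id, body FROM comment WHERE post_id IN ({placeholders})",
        list(post_ids),
    ).fetchall()
    grouped = {pid: [] for pid in post_ids}
    for post_id, comment_id, body in rows:
        grouped[post_id].append((comment_id, body))
    return grouped
```

The single-query shape is also the one that ports cleanly if the app later moves to a networked database, which is the migration concern raised above.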

Concurrency, scaling, and performance anecdotes

  • SQLite is praised for read performance: informal reports of 5x faster than Postgres on some workloads and very high read QPS; one report cites ~400 writes/s and ~41k reads/s with WAL.
  • Others hit limits quickly once multiple background workers write concurrently; WAL and tuning help (a typical setup is sketched after this list), but SQLite explicitly discourages high write concurrency.
  • Sharding-per-customer with SQLite and using distributed systems like rqlite or Cloudflare D1 are discussed as scaling patterns, but synchronization complexity is acknowledged.
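
A sketch of the WAL-plus-busy-timeout setup commenters describe, assuming Python’s stdlib sqlite3; the specific values are common suggestions, not settings endorsed in the thread.

```python
# Sketch of opening SQLite tuned for mixed readers and a single writer.
import sqlite3

def open_tuned(path: str) -> sqlite3.Connection:
    # timeout drives SQLite's busy handler: wait up to 5 s on a locked
    # database instead of failing immediately.
    conn = sqlite3.connect(path, timeout=5.0)
    conn.execute("PRAGMA journal_mode = WAL")    # readers no longer block the writer
    conn.execute("PRAGMA synchronous = NORMAL")  # common durability/throughput trade-off under WAL
    return conn
```

Even with this setup there is still exactly one writer at a time, which is why the advice in the thread is to keep write bursts short rather than expect server-grade write concurrency.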

Deployment, backups, and ecosystem

  • Simple pattern: nightly backups by stopping writes and copying the DB file; more robust options: the .backup/.dump commands, sqlite3_rsync, or Litestream replication (an online-backup sketch follows this list).
  • Caveats: beware storage characteristics (e.g., EBS latency); many small queries can still hurt if backing storage is slow.
  • There is experimentation with SQLite in WASM in the browser (e.g., LiveStoreJS), and SQLite-based tooling (Fossil, custom VFS like kvvfs) reinforces the “SQLite everywhere” theme.
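
For the backup bullet above, a sketch of an online backup that avoids stopping writes, using the Connection.backup API in Python’s stdlib sqlite3 (available since Python 3.7); the file names are illustrative.

```python
# Sketch of an online backup: page-by-page copy while the source database
# remains readable and writable.
import sqlite3

src = sqlite3.connect("app.db")
dst = sqlite3.connect("app-backup.db")
src.backup(dst)  # equivalent in spirit to the CLI's .backup command
dst.close()
src.close()
```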