SQLite-on-the-server is misunderstood: Better at hyper-scale than micro-scale
Partitioned SQLite on the server (Durable Objects / Turso style)
- Discussion centers on “SQLite-per-partition” (e.g., per chat room, per tenant, per user) as a scalable model similar to Cassandra/DynamoDB partitioning.
- Many workloads (chat, social feeds, B2B SaaS partitioned by organization) map well when there’s a clear ownership hierarchy for records.
- Some argue that with very low QPS per partition, read replicas are often unnecessary; others expect vendors to add them anyway.
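The per-partition model above can be sketched in a few lines. This is a hypothetical illustration, not an API from Durable Objects or Turso: `DB_ROOT`, the schema, and `tenant_db()` are all assumptions made for the example.

```python
import sqlite3
from pathlib import Path

# Hypothetical sketch of SQLite-per-partition: one database file per tenant,
# created lazily on first access. Layout and schema are assumptions.
DB_ROOT = Path("tenants")
DB_ROOT.mkdir(exist_ok=True)

def tenant_db(tenant_id: str) -> sqlite3.Connection:
    """Open (and lazily initialize) a tenant's private database."""
    conn = sqlite3.connect(DB_ROOT / f"{tenant_id}.db")
    conn.execute("PRAGMA journal_mode=WAL")  # readers don't block the writer
    conn.execute(
        "CREATE TABLE IF NOT EXISTS messages (id INTEGER PRIMARY KEY, body TEXT)"
    )
    return conn
```

Because each partition only ever sees its own tenant's traffic, per-partition QPS stays tiny even at very large tenant counts — which is the "better at hyper-scale" claim in a nutshell.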
Global state and cross-partition queries
- Biggest caveat: anything requiring global tables (user sessions, email uniqueness, user search, admin analytics).
- Patterns mentioned:
  - A separate global DB (often Postgres or similar) for metadata, auth, billing, etc.
  - Pushing metrics/changes into a data warehouse (ClickHouse or another OLAP store, fed via CDC) for cross-tenant analytics.
  - Accepting eventual consistency, with queues/workflows coordinating multi-write paths.
- Some say you can always “patch” global requirements with more layers, but complexity accumulates; global state tends to reappear somewhere.
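The "separate global DB" pattern can be sketched with a single authoritative table for globally-unique facts (here, email ownership), while everything else lives in per-tenant databases. A real deployment would likely put this table in Postgres; `sqlite3` stands in below to keep the sketch self-contained, and `register()` is an invented helper, not a real API.

```python
import sqlite3

# Hypothetical sketch: one small global table enforces a cross-partition
# invariant (email uniqueness); all other data stays per-tenant.
global_db = sqlite3.connect(":memory:")
global_db.execute(
    "CREATE TABLE accounts (email TEXT PRIMARY KEY, tenant_id TEXT NOT NULL)"
)

def register(email: str, tenant_id: str) -> bool:
    """Claim an email globally before creating any per-tenant records."""
    try:
        with global_db:  # transaction: commit on success, roll back on error
            global_db.execute(
                "INSERT INTO accounts (email, tenant_id) VALUES (?, ?)",
                (email, tenant_id),
            )
        return True
    except sqlite3.IntegrityError:
        return False  # email already claimed by some tenant
```

The design choice here is that the global table stays tiny and write-light (one row per account), so it does not undo the scaling win of partitioning — but it is exactly the "global state reappearing somewhere" that the caveat warns about.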
Replication, consistency, and read patterns
- Classic primary/read-replica problems discussed, chiefly stale reads after a write (losing read-your-writes consistency); mitigations include temporarily routing a session's reads to the primary, session-based replica selection, or returning the updated data in the write response itself.
- For per-partition SQLite, vendors are expected to provide read-replica stories over time.
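The "temporarily route to primary" mitigation can be sketched as session pinning: after a write, the session's reads go to the primary for a short window, long enough for replicas to catch up. `PIN_SECONDS` and the in-memory session store are assumptions for illustration.

```python
import time

# Hypothetical sketch of read-your-writes routing: pin a session's reads
# to the primary for a short window after any write by that session.
PIN_SECONDS = 5.0
_last_write: dict = {}  # session_id -> monotonic time of last write

def record_write(session_id: str) -> None:
    _last_write[session_id] = time.monotonic()

def choose_target(session_id: str) -> str:
    """Reads go to a replica unless this session wrote recently."""
    wrote_at = _last_write.get(session_id, float("-inf"))
    if time.monotonic() - wrote_at < PIN_SECONDS:
        return "primary"  # a replica may not have replayed the write yet
    return "replica"
```

The window is a heuristic: it trades a little extra primary load for not having to track actual replication lag per replica.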
Local-first, sync, and backups
- Strong interest in simple “sync SQLite to cloud” for local-first apps: whole-DB snapshots to S3, versioning, point-in-time restore.
- Tools and ideas mentioned: Litestream (with concerns about maintenance and backup deletion authority), SQLite session extension (changesets, conflict handlers), CR-SQLite (CRDT extension), Replicache/Zero/Evolu, sqlsync.dev, PowerSync, Dolt.
- Multi-device offline-first is seen as hard: effectively AP multi-master; CRDTs help but may clash with strict relational invariants.
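The "whole-DB snapshot" backup idea is straightforward with SQLite's online backup API (exposed in Python as `Connection.backup`); a real pipeline would then upload the snapshot file to S3 and keep versioned copies for point-in-time restore — that upload step is omitted here.

```python
import sqlite3

# Minimal sketch of whole-DB snapshotting via SQLite's online backup API.
# The copy is transactionally consistent even if the source is in use.
def snapshot(src_path: str, dest_path: str) -> None:
    src = sqlite3.connect(src_path)
    dest = sqlite3.connect(dest_path)
    with dest:
        src.backup(dest)  # page-by-page online copy of the whole database
    dest.close()
    src.close()
```

This is essentially what Litestream automates (plus WAL shipping for continuous, not just point-in-time, recovery).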
Comparisons to Postgres/MySQL sharding and DuckDB
- One line of argument: manual sharding with consistent hashing isn’t that painful; overhead is comparable to managing many SQLite DBs.
- Author’s counterpoint: automatic partitioning/rebalancing (as in some SQLite-on-server products) can reduce custom logic and operational burden.
- DuckDB is highlighted as a different beast: an in-process, columnar OLAP engine that is vastly faster for analytics workloads, but not a replacement for OLTP/transactional use.
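The consistent hashing mentioned in the manual-sharding argument can be sketched as a hash ring: keys map to shards via hashed points on a ring, so adding a shard remaps only a fraction of keys instead of nearly all of them (as naive `hash(key) % n` would). The virtual-node count is an assumption.

```python
import bisect
import hashlib

# Minimal consistent-hashing sketch: each shard owns many "virtual node"
# points on a ring; a key belongs to the first point at or after its hash.
class HashRing:
    def __init__(self, shards, vnodes: int = 64):
        self._ring = sorted(
            (self._hash(f"{shard}#{i}"), shard)
            for shard in shards
            for i in range(vnodes)
        )
        self._points = [point for point, _ in self._ring]

    @staticmethod
    def _hash(key: str) -> int:
        return int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")

    def shard_for(self, key: str) -> str:
        idx = bisect.bisect(self._points, self._hash(key)) % len(self._ring)
        return self._ring[idx][1]
```

With `n` shards, growing to `n + 1` moves roughly `1/(n + 1)` of keys — the property that makes rebalancing tolerable, whether done by hand over Postgres/MySQL or automatically by an SQLite-on-server product.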
Operational experiences, reliability, and hype
- Reports of SQLite significantly outperforming Postgres for single-writer, simple-key workloads; others note tuning can change this.
- Some have seen SQLite file corruption in dev; others insist corruption is extremely rare if APIs and safety settings are used correctly.
- SQLite’s single-writer model (no classic MVCC; WAL mode gives readers snapshots but allows only one writer at a time) is cited as a pain point in multiuser environments; MVCC-extended variants exist, but their fidelity to upstream is unclear.
- Opinions split on “distributed SQLite” hype: some see it as a promising, simpler alternative to massive clusters; others compare it to the NoSQL hype cycle and warn it’s only right for a narrow set of use cases.
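The "safety settings" invoked in the corruption debate usually come down to a handful of PRAGMAs and a lock timeout. A common conservative baseline is sketched below; the exact values are a judgment call, not prescribed by the discussion.

```python
import sqlite3

# Sketch of a conservative server-side SQLite configuration; values are
# illustrative assumptions, not settings endorsed by the thread.
def open_safely(path: str) -> sqlite3.Connection:
    conn = sqlite3.connect(path, timeout=5.0)  # wait on locks instead of failing
    conn.execute("PRAGMA journal_mode=WAL")    # concurrent readers, crash-safe
    conn.execute("PRAGMA synchronous=NORMAL")  # durable enough under WAL
    conn.execute("PRAGMA foreign_keys=ON")     # off by default; enforce invariants
    conn.execute("PRAGMA busy_timeout=5000")   # retry on SQLITE_BUSY (milliseconds)
    return conn
```

Most reported corruption stories trace back to bypassing settings like these (or to copying live database files without the backup API), which is why experienced users call corruption extremely rare in practice.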