Redis is fast – I'll cache in Postgres
Benchmark design and interpretation
- Many commenters say the setup measures round-trip HTTP latency, not true DB/cache throughput; Redis ends up bottlenecked by the HTTP layer while Postgres maxes out its two cores.
- Other criticisms: default configs for both systems, tiny values, no pipelining, homelab hardware (possibly with networked storage), and unclear indexes/UUID column type. Some call the results misleading for serious capacity planning (see the direct-client sketch after this list).
- Others defend benchmarking defaults because many production systems run them, and note the article clearly states it’s about “fast enough,” not peak performance.
- Several want a mixed workload benchmark (simple cache hits plus complex queries) and “unthrottled” runs to see where each saturates.
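To make those criticisms concrete, here is a rough sketch of measuring the two stores directly, bypassing the HTTP layer, with and without Redis pipelining. The connection details, key names, and a simple `cache(key, value)` table are assumptions for illustration, not the article's actual setup.

```python
# Rough sketch: time Redis and Postgres clients directly, no HTTP in between.
# Hosts, DSN, keys, and the `cache` table are placeholders.
import time
import redis
import psycopg2

N = 10_000
r = redis.Redis(host="localhost", port=6379)
pg = psycopg2.connect("dbname=app")  # assumes a key/value table named `cache` exists

def timed(label, fn):
    start = time.perf_counter()
    fn()
    elapsed = time.perf_counter() - start
    print(f"{label}: {N / elapsed:,.0f} ops/s")

# One round trip per command -- roughly what a per-request web handler does.
timed("redis GET, no pipelining", lambda: [r.get("k") for _ in range(N)])

# Batched round trips -- where Redis throughput usually jumps.
def redis_pipelined():
    pipe = r.pipeline(transaction=False)
    for _ in range(N):
        pipe.get("k")
    pipe.execute()

timed("redis GET, pipelined", redis_pipelined)

# Plain indexed lookups against Postgres on the same connection.
def pg_lookup():
    with pg.cursor() as cur:
        for _ in range(N):
            cur.execute("SELECT value FROM cache WHERE key = %s", ("k",))
            cur.fetchone()

timed("postgres SELECT", pg_lookup)
```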
Postgres as a cache: viability and mechanics
- Multiple anecdotes show Postgres key–value lookups in ~1ms vs Redis ~0.5ms; many consider that difference negligible once network latency is included.
- Common pattern: UNLOGGED tables for cache data, optional WAL tweaks, and a simple schema with an expiry timestamp; some use pg_cron, triggers, or partition dropping for cleanup (see the sketch after this list).
- Concerns: cache queries can contend with primary DB workload, exacerbate CPU/connection exhaustion, and degrade exactly when the DB is under stress.
- Debate over UNLOGGED: losing the cache on a crash can cause a thundering herd against the primary tables; others counter that a cache, by definition, can be rebuilt.
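A minimal sketch of that UNLOGGED-table-plus-expiry pattern, assuming psycopg2; the DSN, table and column names, and the periodic-DELETE cleanup are illustrative placeholders, not the schema from the article.

```python
# Sketch of the "UNLOGGED table + expiry timestamp" cache pattern.
# Table/column names and the DSN are illustrative.
import psycopg2

DDL = """
CREATE UNLOGGED TABLE IF NOT EXISTS cache (
    key        text PRIMARY KEY,
    value      text NOT NULL,
    expires_at timestamptz NOT NULL
);
"""

def cache_set(conn, key, value, ttl_seconds):
    with conn.cursor() as cur:
        cur.execute(
            """
            INSERT INTO cache (key, value, expires_at)
            VALUES (%s, %s, now() + make_interval(secs => %s))
            ON CONFLICT (key) DO UPDATE
                SET value = EXCLUDED.value,
                    expires_at = EXCLUDED.expires_at
            """,
            (key, value, ttl_seconds),
        )
    conn.commit()

def cache_get(conn, key):
    with conn.cursor() as cur:
        cur.execute(
            "SELECT value FROM cache WHERE key = %s AND expires_at > now()",
            (key,),
        )
        row = cur.fetchone()
        return row[0] if row else None

def cache_sweep(conn):
    # Commenters mention pg_cron, triggers, or partition drops for cleanup;
    # a periodic DELETE is the simplest variant.
    with conn.cursor() as cur:
        cur.execute("DELETE FROM cache WHERE expires_at < now()")
    conn.commit()

if __name__ == "__main__":
    conn = psycopg2.connect("dbname=app")  # placeholder DSN
    with conn.cursor() as cur:
        cur.execute(DDL)
    conn.commit()
    cache_set(conn, "user:42:profile", '{"name": "Ada"}', ttl_seconds=60)
    print(cache_get(conn, "user:42:profile"))
```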
Redis and dedicated caches
- Supporters emphasize built-in TTL, eviction policies, simple ops, and high throughput, especially with pipelining and local sockets (see the sketch after this list).
- Some teams report Redis as an extra operational burden compared to “just Postgres”; others say Redis has cost them only minutes of ops time over years.
- Several argue native TTL in Postgres would eliminate a lot of unnecessary Redis deployments.
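A short sketch of the two Redis features commenters keep pointing to, per-key TTL and pipelining, using redis-py; the host, socket path, and key names are placeholders.

```python
# Per-key TTL and pipelining in redis-py. Connection details are placeholders.
import redis

r = redis.Redis(host="localhost", port=6379)
# For the "local socket" case mentioned above:
# r = redis.Redis(unix_socket_path="/var/run/redis/redis.sock")

# Native TTL: the key expires on its own, no sweep job needed.
r.set("user:42:profile", '{"name": "Ada"}', ex=60)
print(r.get("user:42:profile"))

# Pipelining: batch many commands into one round trip,
# which is where most of Redis's throughput headroom comes from.
pipe = r.pipeline(transaction=False)
for i in range(1000):
    pipe.set(f"key:{i}", f"value:{i}", ex=60)
pipe.execute()
```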
When to add Redis (or any extra service)
- Strong theme: start with Postgres or even an in-process memory cache; add Redis only once you have clear performance or capacity problems (see the sketch after this list).
- Others warn against assuming you don’t need low latency; removing a working Redis setup purely for ideological “simplicity” also has a cost.
- Broader takeaway: under modest load (single-digit thousands of RPS), Postgres-as-cache is often sufficient, and over-engineered, multi-service stacks are common.
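For the "start simple" end of that spectrum, a minimal in-process TTL cache; the class and key names are illustrative, and it deliberately shares nothing across workers or restarts.

```python
# Minimal in-process TTL cache, the "start simple" option mentioned above.
# Single-process only: entries are not shared across workers or restarts.
import time

class TTLCache:
    def __init__(self):
        self._store = {}  # key -> (expires_at, value)

    def set(self, key, value, ttl_seconds):
        self._store[key] = (time.monotonic() + ttl_seconds, value)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        expires_at, value = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # lazily drop expired entries
            return default
        return value

cache = TTLCache()
cache.set("user:42:profile", {"name": "Ada"}, ttl_seconds=60)
print(cache.get("user:42:profile"))
```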