How to use Postgres for everything
Overall sentiment on “Postgres for everything”
- Many like the idea of starting with a single, familiar stack (Postgres) to move fast and avoid overengineering.
- Common heuristic: use Postgres for as much as possible early; only diversify when there is a concrete bottleneck.
- Others argue strongly that “Postgres for everything” becomes harmful in professional/large-scale contexts, especially when it’s used as app server, queue, and API surface.
Single database vs. multiple services / teams
- Concern: one shared DB for 100+ engineers leads to “database as the API,” tight coupling, risky migrations, and large outage blast radius.
- Counterpoint: those issues can be managed with process (migration guidelines, reviews, views as API layer, separate logical DBs) and are often preferable to premature microservices.
- Another view: organizational boundaries (separate services and databases) reduce coordination costs and improve team independence.
- Some report huge business success with a Postgres-centric monolith, then gradually introducing specialized systems only once revenue/scale justified it.
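The "views as API layer" idea above can be sketched in plain SQL: internal tables live in one schema, and consumers are only granted access to views in a separate schema, so the underlying tables can be refactored without breaking callers. Schema, table, and role names here are hypothetical, assumed for illustration:

```sql
-- Hypothetical setup: internal tables vs. a stable "API" schema.
CREATE SCHEMA internal;
CREATE SCHEMA api;

CREATE TABLE internal.users (
    id         bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    email      text NOT NULL,
    full_name  text,
    created_at timestamptz NOT NULL DEFAULT now()
);

-- Consumers query api.users; internal.users can change shape
-- as long as this view keeps its contract.
CREATE VIEW api.users AS
SELECT id, email, full_name
FROM internal.users;

-- Grant other teams access only to the API schema, not the internals.
GRANT USAGE ON SCHEMA api TO app_readonly;
GRANT SELECT ON api.users TO app_readonly;
```

The same pattern extends to per-team logical databases or schemas, which is the process-based answer to the "database as the API" coupling concern.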
Using Postgres beyond OLTP (queues, search, streams, HTML, etc.)
- People mention Postgres-backed job queues built on SELECT ... FOR UPDATE SKIP LOCKED, with acknowledged tradeoffs around mixed-duration jobs and vacuum/bloat.
- Vector search with pgvector is seen as a "no-brainer" for many use cases.
- Full-text search is considered powerful but less user-friendly than Elasticsearch; some commenters share homegrown BM25 solutions and n-gram workarounds.
- Generating HTML/UI directly from Postgres is viewed skeptically, though there are examples in other ecosystems.
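The SKIP LOCKED queue pattern mentioned above is commonly sketched like this (table and column names are assumptions for illustration): each worker atomically claims one pending job, and concurrent workers skip rows that are already locked rather than blocking on them.

```sql
-- Minimal Postgres job-queue sketch.
CREATE TABLE jobs (
    id      bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    payload jsonb NOT NULL,
    status  text NOT NULL DEFAULT 'pending',
    run_at  timestamptz NOT NULL DEFAULT now()
);

-- Worker loop body: claim one due job. FOR UPDATE SKIP LOCKED lets
-- concurrent workers pass over rows another worker has locked.
WITH next_job AS (
    SELECT id
    FROM jobs
    WHERE status = 'pending' AND run_at <= now()
    ORDER BY run_at
    FOR UPDATE SKIP LOCKED
    LIMIT 1
)
UPDATE jobs
SET status = 'running'
FROM next_job
WHERE jobs.id = next_job.id
RETURNING jobs.id, jobs.payload;
```

The vacuum/bloat tradeoff shows up here directly: every claim and completion is an UPDATE (a new row version), so finished jobs should be deleted or moved to an archive table promptly to keep the hot table small.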
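For the pgvector point, a minimal sketch of nearest-neighbour search over embeddings (the 3-dimensional vector is purely illustrative; real embedding models produce hundreds or thousands of dimensions):

```sql
-- Requires the pgvector extension to be installed on the server.
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE items (
    id        bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    embedding vector(3)  -- tiny dimension for illustration only
);

-- Optional approximate index for larger tables.
CREATE INDEX ON items USING hnsw (embedding vector_l2_ops);

-- <-> is L2 distance (pgvector also offers <=> cosine and <#> inner product).
SELECT id
FROM items
ORDER BY embedding <-> '[0.1, 0.2, 0.3]'
LIMIT 5;
```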
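The built-in full-text search that commenters compare to Elasticsearch is typically set up with a generated tsvector column plus a GIN index; a sketch with assumed table/column names:

```sql
-- Full-text search over title + body, title weighted higher.
CREATE TABLE docs (
    id    bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    title text NOT NULL,
    body  text NOT NULL,
    tsv   tsvector GENERATED ALWAYS AS (
              setweight(to_tsvector('english', title), 'A') ||
              setweight(to_tsvector('english', body),  'B')
          ) STORED
);
CREATE INDEX docs_tsv_idx ON docs USING gin (tsv);

-- websearch_to_tsquery accepts Google-style syntax ("a b", OR, -not).
SELECT id, title, ts_rank(tsv, q) AS rank
FROM docs, websearch_to_tsquery('english', 'postgres queue') AS q
WHERE tsv @@ q
ORDER BY rank DESC
LIMIT 10;
```

Note that ts_rank is TF-based, not BM25, which is one reason the homegrown BM25 workarounds come up.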
Alternatives and limits of “one tool”
- Several commenters recommend purpose-built tools for:
  - Analytics / time-series: ClickHouse, Snowflake, etc.
  - Caching: Redis-like systems (though there's interest in Postgres-based caches).
  - Queues / streams: Kafka, NATS, dedicated message queues.
- Argument against extensions: they often lag core Postgres in feature coverage and may hit scaling limits sooner.
- Counterargument: extensions are how Postgres evolves; adding analytical/AI/distributed capabilities is valuable despite tradeoffs.
Other technical topics raised
- Graph databases: Apache AGE is described as immature; Neo4j is preferred, but its licensing is an issue; some question when a graph DB is truly justified.
- Bitemporal data: many claim it can be modeled cleanly in vanilla Postgres; others argue large-scale bitemporality justifies specialized databases.
- Local-first: tools like ElectricSQL and Postgres–SQLite sync solutions are mentioned as bridges.
- Operationally: self-hosting tips (backups via pg_dump/pgBackRest, tuning guides, HA via Patroni) and caveats about version upgrades and administration overhead.
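The "bitemporal in vanilla Postgres" claim usually means modeling valid time and transaction time as two range columns, with an exclusion constraint preventing overlapping versions; a sketch under those assumptions (table, columns, and the example date are hypothetical):

```sql
-- btree_gist is needed to mix equality (product_id) with range
-- operators inside one exclusion constraint.
CREATE EXTENSION IF NOT EXISTS btree_gist;

CREATE TABLE prices (
    product_id bigint  NOT NULL,
    price      numeric NOT NULL,
    valid      tstzrange NOT NULL,  -- when the fact was true in the world
    recorded   tstzrange NOT NULL   -- when the database believed it
        DEFAULT tstzrange(now(), NULL),
    -- No two rows for the same product may overlap in both dimensions.
    EXCLUDE USING gist (
        product_id WITH =,
        valid      WITH &&,
        recorded   WITH &&
    )
);

-- "As-of" query: what did we believe on 2024-01-01 about the price
-- in effect on that date?
SELECT product_id, price
FROM prices
WHERE valid    @> '2024-01-01'::timestamptz
  AND recorded @> '2024-01-01'::timestamptz;
```

This works cleanly at modest scale; the counterargument in the thread is that query planning and storage for large bitemporal histories are where specialized databases earn their keep.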