Migrating to Postgres
JSON, columnar storage, and query design
- Several comments criticize heavy use of `json_*` functions in queries, noting planner issues and weaker performance vs properly modeled columns.
- Some defend JSON as a pragmatic fit for flexible, customer-specific attributes, especially when the schema was decided early and is now hard to change.
- There’s interest in easier columnar options (e.g., AlloyDB, pg_mooncake) to keep OLTP writes in Postgres/MySQL while offloading analytic-style scans to columnstores.
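The planner complaint can be made concrete with a minimal sketch (the `orders` table and `attrs` column are hypothetical): filtering on a jsonb key gives the planner no per-key statistics, while an expression index, or promoting the key to a real column, restores them.

```sql
-- Filtering on a jsonb key: the planner sees an opaque expression
-- and falls back to generic selectivity estimates.
SELECT * FROM orders WHERE attrs->>'region' = 'eu-west';

-- An expression index (note the double parentheses) lets Postgres
-- both index the lookup and collect statistics on the expression:
CREATE INDEX orders_attrs_region_idx ON orders ((attrs->>'region'));
```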
Multi‑region, distributed DBs, and HA
- Many argue most apps don’t need multi‑region writes or distributed SQL; single Postgres with replicas covers 99% of use cases.
- Others say global read replicas can be a real win for latency if you have users worldwide, but warn about replica lag and operational complexity.
- Multi‑master Postgres options exist but are described as “nightmare‑level” operationally.
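The replica-lag concern above is observable: on a streaming replica, a sketch like the following (standard Postgres functions, no assumed schema) reports how far replay trails the primary.

```sql
-- Run on the replica; NULL on a primary. The interval is the time
-- since the last replayed transaction, a rough proxy for lag.
SELECT now() - pg_last_xact_replay_timestamp() AS replay_lag;
```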
ORMs, Prisma, and query performance
- Strong criticism of Prisma: historically no JOINs, app‑side joins via an extra service, some operations (e.g., DISTINCT) done in memory, and poor visibility into generated SQL.
- Supporters note Prisma now has preview JOIN modes and a move away from the Rust sidecar, plus type‑safe raw SQL/TypedSQL as escape hatches.
- Broader debate: some view ORMs as technical debt that hides SQL and leads to bad patterns (`SELECT *`, N+1 queries, non‑normalized schemas); others find them huge productivity wins if used for simple CRUD and dropped for complex queries.
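The N+1 pattern criticized above looks like this in practice (the `users`/`posts` tables are illustrative): the ORM fetches the parent rows, then issues one query per parent, where a single join would do.

```sql
-- N+1: one query for the parents, then one round trip per parent row.
SELECT id, name FROM users;
SELECT * FROM posts WHERE user_id = 1;
SELECT * FROM posts WHERE user_id = 2;
-- ...and so on, once per user.

-- The single-query equivalent the critics want to see generated:
SELECT u.id, u.name, p.*
FROM users u
LEFT JOIN posts p ON p.user_id = u.id;
```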
Normalization, indexing, and schema choices
- One camp emphasizes strict normalization and avoiding low‑cardinality text columns to save memory; another says normalization vs denormalization usually matters less than indexing and query patterns.
- Materialized views are proposed as a compromise (normalized writes, denormalized reads), but Postgres’ lack of automatic refresh is noted.
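The compromise can be sketched as follows (names are illustrative): writes stay normalized in `orders`, reads hit a pre-aggregated view, and the noted limitation shows up as the need to schedule the refresh yourself.

```sql
-- Denormalized read model over normalized writes:
CREATE MATERIALIZED VIEW daily_order_totals AS
SELECT customer_id,
       date_trunc('day', created_at) AS day,
       sum(amount) AS total
FROM orders
GROUP BY customer_id, day;

-- A unique index is required for CONCURRENTLY below:
CREATE UNIQUE INDEX ON daily_order_totals (customer_id, day);

-- Postgres will not refresh this automatically; cron, pg_cron, or
-- application code must run it:
REFRESH MATERIALIZED VIEW CONCURRENTLY daily_order_totals;
```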
CockroachDB optimizer and “unused” indexes
- There’s speculation that CockroachDB’s “unused index” flags might be due to zigzag joins using other indexes instead of obvious covering ones, leading teams to misinterpret index usage.
Postgres scale, cost, and overengineering
- Multiple practitioners report single Postgres/MySQL instances happily handling hundreds of millions to tens of billions of rows with proper indexing, partitioning, and hardware.
- Many see the article’s ~100M‑row table and mid‑six‑figure distributed DB bill as a textbook case of premature adoption of “web‑scale” tech.
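The partitioning these practitioners mention is declarative in modern Postgres; a minimal sketch (schema is hypothetical) shows monthly range partitions, which keep per-partition indexes small and let the planner prune untouched months.

```sql
-- Parent table partitioned by time:
CREATE TABLE events (
    id         bigserial,
    created_at timestamptz NOT NULL,
    payload    jsonb
) PARTITION BY RANGE (created_at);

-- One partition per month; queries filtered on created_at only
-- scan the matching partitions:
CREATE TABLE events_2025_01 PARTITION OF events
    FOR VALUES FROM ('2025-01-01') TO ('2025-02-01');
```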
Postgres vs MySQL and alternatives
- Postgres gets praise for features, extensibility, and tooling; MySQL is defended as rock‑solid and simpler to operate for basic OLTP.
- Specialized systems (ClickHouse, TimescaleDB, Spanner‑like DBs) are seen as appropriate for specific high‑volume analytics or time‑series scenarios, often fed from Postgres via CDC.
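The CDC feed mentioned above typically rides on Postgres logical replication; a minimal sketch (table names are hypothetical) is just a publication that a connector such as Debezium subscribes to before writing into the downstream columnstore.

```sql
-- Requires wal_level = logical in postgresql.conf.
CREATE PUBLICATION analytics_pub FOR TABLE orders, events;
-- A CDC connector subscribes to this publication and streams
-- row changes into ClickHouse (or another analytics store).
```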
“Postgres as default” and migrations
- Many note a pattern: lots of “migrating to Postgres” stories, few in the opposite direction, though examples exist (e.g., to MySQL, ClickHouse, ADX, SingleStore) for org‑specific reasons.
- Consensus vibe: start with Postgres unless you clearly know why you need something else; moving from Postgres to a specialized system later is easier than unwinding an exotic choice.
GDPR and multi‑region requirement (unclear)
- The article’s claim that GDPR “mandates” multi‑region setups is questioned; commenters ask for clarification, finding no clear regulatory basis for it.