PlanetScale for Postgres is now GA
Postgres behavior & index/vacuum concerns
- Discussion of index bloat for high-insert workloads: PlanetScale doesn’t do special tuning yet but has automated bloat detection and relies on ample resources (Metal) to help autovacuum keep up.
- A Postgres B-tree contributor notes that modern releases handle high-insert patterns well and asks for concrete repros, clarifying that indexes cannot shrink at the file level without REINDEX/VACUUM FULL and can only reuse pages internally (a maintenance sketch follows this list).
- Clarification that VACUUM truncates table heaps in some cases but not indexes; relation truncation can be disabled when disruptive.
- XID wraparound and autovacuum tuning are acknowledged as real issues for heavy workloads, but the details of PlanetScale’s policies are not deeply discussed.
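As context for the maintenance levers mentioned above, a minimal sketch of reclaiming index space and tightening per-table autovacuum; the table and index names are hypothetical, and none of this reflects PlanetScale’s actual tuning.

```python
import psycopg2

# Hypothetical connection; table/index names below are illustrative only.
conn = psycopg2.connect("dbname=app user=app host=localhost")
conn.autocommit = True  # REINDEX CONCURRENTLY cannot run inside a transaction block
cur = conn.cursor()

# Rebuild a bloated index without blocking writes; aside from VACUUM FULL,
# this is the only way to shrink the index file itself.
cur.execute("REINDEX INDEX CONCURRENTLY events_created_at_idx;")

# Make autovacuum trigger earlier on a high-insert table so obsolete pages
# are reused sooner (per-table storage parameters; insert factor is PG 13+).
cur.execute("""
    ALTER TABLE events SET (
        autovacuum_vacuum_scale_factor = 0.02,
        autovacuum_vacuum_insert_scale_factor = 0.05
    );
""")

# Check how close each database is to XID wraparound.
cur.execute("SELECT datname, age(datfrozenxid) FROM pg_database ORDER BY 2 DESC;")
for name, xid_age in cur.fetchall():
    print(name, xid_age)
```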
Postgres vs MySQL for greenfield projects
- Many argue Postgres is the default choice today: richer features, extensions, better standards compliance, and wide ecosystem adoption.
- Reasons given to still choose MySQL: long-standing operational expertise, historical “big web” use, better documented internals/locking, InnoDB’s direct I/O and page reuse patterns, mature sharding via Vitess, and better behavior for some extreme UPDATE-heavy workloads.
- Large-scale hybrid OLAP/OLTP on Postgres is described as trickier due to replication-conflict settings (max_standby_streaming_delay, hot_standby_feedback); a configuration sketch follows this list.
- Several participants still say they would usually start new products on managed Postgres, keeping MySQL as an escape hatch for specific hyperscale patterns.
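For the replication-conflict trade-off mentioned above, a hedged sketch of the two standby settings applied on a hypothetical replica; the values are illustrative, not recommendations from the thread.

```python
import psycopg2

# Connect to a hypothetical read replica used for long analytical queries.
replica = psycopg2.connect("dbname=app user=postgres host=replica.internal")
replica.autocommit = True  # ALTER SYSTEM cannot run inside a transaction block
cur = replica.cursor()

# Let WAL replay wait longer before cancelling conflicting standby queries
# (trades replica freshness for fewer cancelled OLAP queries).
cur.execute("ALTER SYSTEM SET max_standby_streaming_delay = '5min';")

# Report the standby's xmin to the primary so vacuum does not remove rows
# the replica still needs (trades primary bloat for fewer conflicts).
cur.execute("ALTER SYSTEM SET hot_standby_feedback = on;")

cur.execute("SELECT pg_reload_conf();")
```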
PlanetScale Postgres architecture & performance
- Core differentiator is “Metal”: Postgres on instances with local NVMe (on AWS/GCP), not network-attached EBS/PD. Claim: orders-of-magnitude lower I/O latency, “unlimited IOPS” in the sense that CPU becomes the bottleneck before disk IOPS.
- Durability is provided via replication across three nodes/AZs; writes are acknowledged only after they are durably logged on at least two nodes (“semi-synchronous” style; a vanilla-Postgres analogue is sketched after this list). Local NVMe is treated as ephemeral; nodes are routinely rebuilt from backups/WAL.
- Benchmarks versus Aurora and Supabase show lower latency and higher throughput on relatively modest hardware; some skepticism about “unlimited IOPS” marketing and smallish benchmark sizes.
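The thread does not spell out PlanetScale’s replication machinery; as a vanilla-Postgres analogue, quorum-based synchronous replication expresses the same “acknowledge only after another node has the WAL” rule. The standby names below are hypothetical.

```python
import psycopg2

# Hypothetical primary; standby names replica_a/replica_b are illustrative.
primary = psycopg2.connect("dbname=app user=postgres host=primary.internal")
primary.autocommit = True  # ALTER SYSTEM cannot run inside a transaction block
cur = primary.cursor()

# Acknowledge a commit only after the WAL is flushed on the primary and on
# at least ANY 1 of the two listed standbys (quorum commit).
cur.execute(
    "ALTER SYSTEM SET synchronous_standby_names = 'ANY 1 (replica_a, replica_b)';"
)
cur.execute("ALTER SYSTEM SET synchronous_commit = 'on';")
cur.execute("SELECT pg_reload_conf();")
```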
Scaling, sharding & Neki
- Current GA offering is single-primary Postgres with automatic failover and strong vertical scaling via Metal; horizontal write scaling still means sharding.
- A separate project, Neki (“Vitess for Postgres”), will provide sharding/distribution; it is inspired by Vitess but is a new codebase. Migration to Neki is intended to be as online and easy as possible, though app changes for sharding may be required (a generic shard-routing sketch follows this list).
- Questions raised about competition with other Postgres sharding systems (Citus, Multigres); no detailed comparison yet.
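Neki’s interface is not described in the thread; the sketch below only illustrates the kind of application-level change sharding typically implies (routing every query by a shard key), with hypothetical shard DSNs.

```python
import hashlib
import psycopg2

# Hypothetical shard map; with a Vitess-style proxy much of this routing
# would live in the proxy layer rather than in application code.
SHARD_DSNS = [
    "dbname=app host=shard0.internal",
    "dbname=app host=shard1.internal",
]

def shard_for(tenant_id: str) -> str:
    """Pick a shard deterministically from the shard key."""
    digest = hashlib.sha256(tenant_id.encode()).digest()
    return SHARD_DSNS[int.from_bytes(digest[:4], "big") % len(SHARD_DSNS)]

def fetch_orders(tenant_id: str):
    # Every query must carry the shard key so it can be routed.
    with psycopg2.connect(shard_for(tenant_id)) as conn:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT id, total FROM orders WHERE tenant_id = %s",
                (tenant_id,),
            )
            return cur.fetchall()
```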
Feature set & compatibility
- PlanetScale confirms Postgres foreign keys are fully supported; older Vitess/MySQL restrictions are historical.
- Postgres extensions are supported, with a published allowlist; specific OLAP/columnar/vector/duckdb-style integrations are not fully detailed in the thread.
- PlanetScale uses “shared nothing” logical/streaming replication, in contrast to Aurora’s storage-level replication; this makes replica lag a consideration (a lag-check sketch follows this list) but avoids Aurora-specific constraints (max_standby_streaming_delay caps, SAN semantics).
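Since replica lag is the main consideration with streaming replication, a small check one could run against standard Postgres views; the hostnames are hypothetical.

```python
import psycopg2

# On the primary: per-standby replication lag as seen by the WAL sender.
with psycopg2.connect("dbname=app host=primary.internal") as conn:
    with conn.cursor() as cur:
        cur.execute("""
            SELECT application_name,
                   pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
            FROM pg_stat_replication;
        """)
        for name, lag_bytes in cur.fetchall():
            print(f"{name}: {lag_bytes} bytes behind")

# On a replica: how stale the last replayed transaction is.
with psycopg2.connect("dbname=app host=replica.internal") as conn:
    with conn.cursor() as cur:
        cur.execute("SELECT now() - pg_last_xact_replay_timestamp();")
        print("replay delay:", cur.fetchone()[0])
```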
Positioning vs Aurora, RDS, Supabase
- Compared to Aurora/RDS: main claims are better price/performance, NVMe instead of EBS, and stronger operational focus (uptime, support). Several users report Aurora being dramatically more expensive for similar capacity.
- Compared to Supabase: PlanetScale positions itself as an enterprise-grade, performance-first Postgres (and Vitess) provider rather than a full backend-as-a-service. Benchmarks vs Supabase are referenced; some migrations from Supabase supposedly reduced cost.
- Some comments note that if one already has deep AWS integration, the benefit over Aurora/RDS is more about performance and cost than functionality.
Latency & network placement
- Concern: managed DBs “on the internet” add latency for OLTP. Responses:
- Databases run in AWS/GCP regions/AZs; colocating app and DB in the same region/AZ keeps latencies low.
- Long-lived TLS connections, keepalives, and efficient clients reduce per-query overhead (see the pooling sketch after this list); for many workloads, database CPU/IO limits are hit before network latency dominates.
- For very high-frequency, ultra-low-latency transactional systems, careful region/AZ placement still matters and remote DBs may be a bottleneck.
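A minimal illustration of the connection-reuse point: a long-lived pool with TCP keepalives so each query skips connection and TLS setup. Pool sizes and hostnames are illustrative.

```python
from psycopg2.pool import SimpleConnectionPool

# Long-lived pool created once at startup; connections stay open, so queries
# pay no per-request TCP/TLS handshake. Keepalive settings are libpq options.
pool = SimpleConnectionPool(
    minconn=2,
    maxconn=10,
    dsn=(
        "dbname=app user=app host=db.same-az.internal "
        "sslmode=require keepalives=1 keepalives_idle=30 keepalives_interval=10"
    ),
)

def get_user(user_id: int):
    conn = pool.getconn()
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT id, email FROM users WHERE id = %s", (user_id,))
            return cur.fetchone()
    finally:
        pool.putconn(conn)  # return the connection to the pool instead of closing it
```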
Pricing, trials & target audience
- Website criticized for not clearly surfacing what PlanetScale is and how to try it; some find the messaging fluffy, others find it clear (“fastest cloud databases, NVMe-backed, Vitess+Postgres”).
- PlanetScale emphasizes being a B2B/high-performance provider; no free hobby tier anymore. Entry pricing is around $39/month with usage-based billing and no long-term commitment.
- Debate on whether B2B products should have free trials; some note pilots via sales are more typical, others argue explicit trial paths would help evaluation.
User experiences & migrations
- Multiple users report positive early-access/beta use: strong performance, stability, quick and engaged support (including during off-hours incidents).
- One migration case from Heroku Postgres notes smoother operations and more control over IOPS/storage, with one complication caused by PgBouncer/Hasura behavior rather than PlanetScale itself.
- Interest in migrating from Aurora, Supabase, and Heroku to PlanetScale, mainly for cost and performance; details of migration tooling and thresholds where it “pays off” remain workload-dependent and not fully specified.