ClickHouse raises $350M Series C

Database business & funding round

  • Being a profitable database vendor is described as very hard: long sales cycles, big upfront investment, and a need to lock in large customers during the hype window.
  • Some see the $350M “Series C” as more like a late-stage (D/E) round; quips that “money isn’t real anymore” reflect a perception of inflated valuations and oversized rounds.
  • Others note that very large Series B/C rounds exist and that raising despite potential profitability can be rational to attack a huge market faster.

Open source, mission, and commercialization

  • A few commenters feel some “ClickHouse, Inc.” decisions run counter to the original project spirit and have hurt the broader OLAP ecosystem.
  • Others push back: the open-source core is still improving quickly (e.g., usability, new features) and commercializing managed storage / “build-it-for-you” pieces is seen as necessary to sustain development beyond the original Yandex use case.
  • Holding back advanced automation (like fully automatic sharding and shared storage) in the OSS version is viewed by some as a sales funnel, by others as a fair business tradeoff.

Self‑hosting vs managed ClickHouse

  • Several people report years of stable self-hosted clusters, including 50+ node setups; overall it’s considered one of the easier databases to operate, but with important pitfalls (defaults, SSL, manual sharding; see the sketch after this list).
  • The cloud offering adds a closed-source SharedMergeTree over S3 with compute/storage separation and automatic scaling; attractive to teams that don’t want the ops overhead.
  • Debate on cost: some argue a colo rack is cheaper than managed cloud after a year; others emphasize enterprises pay for reduced hassle.
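
For context on the “manual sharding” pitfall above: in self-hosted ClickHouse, sharding is wired up by hand with a cluster definition plus a Distributed table that fans queries out over per-shard local tables. A minimal sketch, assuming a cluster named my_cluster is already declared in the server’s remote_servers config (cluster, database, and table names here are hypothetical):

    -- Local table, created on every shard (ON CLUSTER runs the DDL cluster-wide).
    CREATE TABLE logs_local ON CLUSTER my_cluster
    (
        ts      DateTime,
        host    LowCardinality(String),
        message String
    )
    ENGINE = MergeTree
    ORDER BY (host, ts);

    -- Distributed "umbrella" table: routes INSERTs by the sharding key and
    -- fans SELECTs out to logs_local on each shard.
    CREATE TABLE logs ON CLUSTER my_cluster AS logs_local
    ENGINE = Distributed(my_cluster, default, logs_local, cityHash64(host));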

What ClickHouse is (OLAP focus)

  • Clarified repeatedly: ClickHouse is an OLAP, columnar analytics database, not an OLTP/Postgres drop-in.
  • Best for large-scale aggregations on append-heavy data (logs, telemetry, royalties, analytics dashboards) with second-level “online” query responses.
  • Internals like MergeTree, bulk inserts, the cost of heavy DELETEs, and ordering keys are central; performance tuning often comes down to dataset layout, partitioning, and avoiding nullable fields (a minimal example follows).
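
A minimal illustration of those layout points (table and column names are made up): partition coarsely by time, order by the columns you filter and aggregate on, and prefer a concrete default over a Nullable wrapper:

    CREATE TABLE events
    (
        ts         DateTime,
        tenant_id  UInt64,
        event_type LowCardinality(String),
        -- Prefer a sentinel default to Nullable(Float64); Nullable stores a
        -- separate null mask and disables some optimizations.
        value      Float64 DEFAULT 0
    )
    ENGINE = MergeTree
    PARTITION BY toYYYYMM(ts)             -- coarse partitions; time filters prune them
    ORDER BY (tenant_id, event_type, ts); -- the sort key drives data skipping

    -- Typical usage: large batched INSERTs, aggregation-style reads.
    SELECT event_type, count(), avg(value)
    FROM events
    WHERE ts >= now() - INTERVAL 7 DAY
    GROUP BY event_type;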

Performance, joins, and memory behavior

  • Many users praise it as “insanely fast” and a night-and-day improvement over systems like TimescaleDB for large analytics workloads.
  • Others recount frequent out-of-memory issues, especially around joins and large inserts; one user reports OOMs in ClickHouse, DuckDB, and Polars on modest hardware.
  • Some describe ClickHouse as a “non-linear memory hog” that really wants ≥32GB RAM, though the memory tracker usually aborts queries rather than crashing.
  • Joins are a recurring pain point: several say naive joins can OOM even on powerful machines, and emphasize it’s a columnar analytics engine, not a general relational workhorse.
  • Counterpoints say join performance has improved significantly; with careful schema design, join ordering, and techniques like rewriting joins as IN subqueries, incremental materialized views, and projections, complex workloads with many joins can succeed at scale (sketch below).
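
One concrete version of the IN rewrite mentioned above, plus the memory cap that makes the tracker abort a query rather than take down the server (schema is hypothetical; this is a sketch of one tuning path, not the only one):

    -- Join form: builds an in-memory hash table from premium_users; can OOM
    -- if the right-hand side is large.
    SELECT e.event_type, count()
    FROM events AS e
    INNER JOIN premium_users AS p ON e.tenant_id = p.tenant_id
    GROUP BY e.event_type;

    -- IN rewrite: same filtering semantics, usually a far smaller footprint.
    SELECT event_type, count()
    FROM events
    WHERE tenant_id IN (SELECT tenant_id FROM premium_users)
    GROUP BY event_type
    SETTINGS max_memory_usage = 10000000000; -- ~10 GB per-query cap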

Adoption, use cases, and pricing

  • Multiple commenters report long-term, high-volume production usage (including a linked public blog from a large CDN provider), but stress that “someone must tend to it” at scale.
  • Some say you must understand internals for cost/performance; others argue that’s true for any serious DB.
  • A concern about “only 2k users” of ClickHouse Cloud is rebutted: many companies self-host, and cloud customers likely include large enterprise contracts.
  • Commenters note that data warehouse ACVs often run far above a few hundred dollars per month; one user cites a $450/month small cloud cluster, while others point to Snowflake-scale contracts as the reference point.

Sampling, correctness, and analytics philosophy

  • Debate on whether storing 900B+ analytics rows is worthwhile: some advocate sampling or Monte Carlo–style approximation; others argue certain use cases (e.g., payments, rare-event analytics) require full fidelity.
  • ClickHouse’s native SAMPLE support is highlighted as a way to trade a little accuracy for performance when exact answers aren’t mandatory (example below).
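
Using the native sampling mentioned above requires declaring a SAMPLE BY key when the table is created (the expression must also appear in the sort key). A minimal sketch with hypothetical names:

    CREATE TABLE pageviews
    (
        ts      DateTime,
        user_id UInt64,
        url     String
    )
    ENGINE = MergeTree
    ORDER BY (toDate(ts), intHash32(user_id)) -- sample key must be in the sort key
    SAMPLE BY intHash32(user_id);

    -- Read roughly 10% of users and scale the counts back up:
    -- a fast, approximate answer instead of a full scan.
    SELECT url, count() * 10 AS approx_views
    FROM pageviews SAMPLE 1 / 10
    GROUP BY url
    ORDER BY approx_views DESC
    LIMIT 20;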

UX, learning curve, and frustration

  • Several users love ClickHouse’s documentation, performance focus, and low-friction replication from OLTP systems (see the sketch after this list).
  • Others find the SQL dialect, operational model, and tools (e.g., ZooKeeper in some setups) unintuitive or filled with “footguns,” especially if approached with a pure Postgres/MySQL mindset.
  • One commenter, stuck with ClickHouse in production, would prefer Postgres for their scale but cannot justify a migration to prove it.
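
On the “low-friction replication from OLTP” point: one route people take is the MaterializedPostgreSQL database engine, which follows a Postgres logical-replication slot and keeps mirrored tables in sync. It has been gated behind an experimental flag (name may vary by version), and the connection details below are placeholders:

    -- Experimental feature; the flag name may differ across versions.
    SET allow_experimental_database_materialized_postgresql = 1;

    -- Mirrors tables from the Postgres database via logical replication.
    CREATE DATABASE pg_mirror
    ENGINE = MaterializedPostgreSQL('pg-host:5432', 'app_db', 'replica_user', 'secret');

    -- Replicated tables appear under pg_mirror and stay up to date.
    SELECT count() FROM pg_mirror.orders;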

Miscellaneous

  • A lighthearted subthread critiques the wrinkled shirts in a team photo; someone involved explains they’d just pulled new swag out of a shipping box for a spontaneous shoot, chalked up to “startup life.”