OpenFreeMap survived 100k requests per second
Cloudflare vs origin load
- Several commenters note that Cloudflare served ~99% of traffic, implying the origin only handled ~1,000 rps while the CDN absorbed ~99,000 rps.
- Others push back on dismissing this as “just Cloudflare surviving”: designing URL paths and cache headers to achieve a 99% hit rate is seen as real engineering work, not an accident.
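One rough way to check the CDN/origin split from the outside is to sample tile URLs and read Cloudflare's cf-cache-status response header, which reports HIT/MISS per request. A minimal sketch, with a hypothetical tile URL template (not OpenFreeMap's real path layout):

```python
import random
import requests

# Hypothetical tile URL template; OpenFreeMap's real path layout differs.
TILE_URL = "https://tiles.example.org/planet/{z}/{x}/{y}.pbf"

def sample_hit_ratio(n: int = 100, z: int = 10) -> float:
    """Request n random tiles and count Cloudflare cache hits via cf-cache-status."""
    hits = 0
    for _ in range(n):
        x, y = random.randrange(2**z), random.randrange(2**z)
        r = requests.get(TILE_URL.format(z=z, x=x, y=y), timeout=10)
        # Cloudflare reports HIT, MISS, EXPIRED, etc.; only HIT was served from cache.
        if r.headers.get("cf-cache-status", "").upper() == "HIT":
            hits += 1
    return hits / n

if __name__ == "__main__":
    print(f"approx. CDN hit ratio: {sample_hit_ratio():.0%}")
```

Popular tiles dominate real traffic, so a uniform random sample like this understates the hit rate an actual embed would see.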
Were the requests “real users” or bots?
- The blog’s claim that usage was largely scripted is questioned: people say map-art fans often “nolife” exploration for hours, which can generate thousands of tile requests.
- One commenter measured 500 tile requests in 2–3 minutes of casual scrolling, arguing the author’s “10–20 requests per user” baseline fits embedded, non-interactive maps, not active exploration.
- Others counter with math: 3B requests / 2M users works out to ~1,500 requests per user, and /r/place-style dynamics strongly suggest significant automation, even if not exclusively bots.
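For scale, the figures quoted in the thread line up like this (all numbers taken from the discussion, not independently verified):

```python
# Back-of-envelope comparison of the per-user figures quoted in the thread.
total_requests = 3_000_000_000   # ~3B tile requests (blog post)
unique_users   = 2_000_000       # ~2M wplace users

avg_per_user = total_requests / unique_users
print(f"average requests per user: {avg_per_user:,.0f}")   # ~1,500

embedded_map_session  = 20    # author's upper estimate for an embedded, mostly static map
active_scroll_session = 500   # one commenter's count for 2-3 minutes of panning/zooming

print(f"vs embedded map: {avg_per_user / embedded_map_session:.0f}x")             # ~75x
print(f"vs active scrolling session: {avg_per_user / active_scroll_session:.0f}x") # ~3x
```

Whether ~1,500 requests per user looks like automation or just a few sessions of enthusiastic panning is exactly what the two sides dispute.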
Blame, entitlement, and expectations of a free API
- There’s a sharp split on whether it was fair to criticize wplace:
  - One side: if you publicly advertise “no limits,” “no registration,” and “commercial use allowed,” you shouldn’t blame users for heavy usage; that’s akin to honoring a bulk hamburger order at the posted price.
  - The other side: hammering a volunteer, no‑SLA service at 100k rps is effectively stress‑testing it; expecting the operator to scale “to infinity” on their own dime is seen as entitled.
- Some argue the operator handled it well by blocking via referrer, reaching out, and suggesting self‑hosting while keeping the free public instance available.
Rate limiting and controls
- Suggestions include per‑IP rate limits (e.g., 100 req/min) or JA3/JA4 fingerprinting, but the maintainer prefers referrer‑based controls so they can talk to site owners and steer heavy users to self‑hosting.
- Others note referrer‑based rate limits match the real control point (the embedding site) better than per‑user limits for distributed clients.
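To make the distinction concrete, a referrer-keyed limit is a token bucket per embedding site rather than per client IP. This is only an illustrative sketch (hypothetical limits, not OpenFreeMap's actual mechanism, which the maintainer describes as blocking specific referrers and contacting site owners):

```python
import time
from dataclasses import dataclass

@dataclass
class Bucket:
    tokens: float
    last: float

class RefererLimiter:
    """Token bucket keyed by the embedding site (Referer header), not the end user."""

    def __init__(self, rate_per_min: float = 6_000, burst: float = 12_000):
        self.rate = rate_per_min / 60.0          # tokens replenished per second
        self.burst = burst
        self.buckets: dict[str, Bucket] = {}

    def allow(self, referer: str) -> bool:
        now = time.monotonic()
        b = self.buckets.setdefault(referer, Bucket(self.burst, now))
        # Refill proportionally to elapsed time, capped at the burst size.
        b.tokens = min(self.burst, b.tokens + (now - b.last) * self.rate)
        b.last = now
        if b.tokens >= 1:
            b.tokens -= 1
            return True
        return False

limiter = RefererLimiter()
print(limiter.allow("https://wplace.example"))  # True until that site's bucket drains
```

The appeal is that the counter aggregates exactly where the conversation happens: one bucket per site, no matter how many distributed clients sit behind it.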
Infrastructure, caching, and costs
- Debate over why wplace didn’t cache tiles themselves: some call it “laziness,” others cite priorities and the reality of a fun side‑project that suddenly went viral.
- 56 Gbit/s is viewed by some as “insane” and by others as feasible on a few well‑provisioned servers; consensus is that bandwidth cost, not raw server capability, is the main constraint for a free service (see the back‑of‑envelope conversion after this list).
- Long subthread on nginx tuning: file‑descriptor limits, open_file_cache sizing, multi_accept, and whether FD caching is even necessary with modern NVMe and OS caches.
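For reference, the directives in that subthread look roughly like this in nginx.conf (illustrative values, not OpenFreeMap's actual configuration):

```nginx
worker_rlimit_nofile 65536;            # raise the per-worker file-descriptor limit

events {
    worker_connections 16384;
    multi_accept on;                   # accept every pending connection per event-loop wakeup
}

http {
    # Cache open file descriptors and metadata for frequently served tile files.
    open_file_cache          max=100000 inactive=60s;
    open_file_cache_valid    120s;
    open_file_cache_min_uses 2;
    open_file_cache_errors   on;
}
```

The "is this even necessary" side argues that with NVMe storage and the OS page cache, re-opening files is cheap enough that open_file_cache buys little.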
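On the 56 Gbit/s point, a quick unit conversion shows why bandwidth pricing rather than server horsepower dominates the discussion (assuming that rate were sustained, which peak traffic generally is not):

```python
# Unit conversion for the 56 Gbit/s figure quoted in the thread.
gbit_s = 56
gb_per_s = gbit_s / 8                       # 7 GB/s
tb_per_day = gb_per_s * 86_400 / 1_000      # ~605 TB/day at that rate
pb_per_month = tb_per_day * 30 / 1_000      # ~18 PB/month if sustained

print(f"{gb_per_s:.0f} GB/s ≈ {tb_per_day:.0f} TB/day ≈ {pb_per_month:.0f} PB/month")
```

Whether that is "insane" depends mostly on how the bytes are billed, which is the thread's point about flat-rate dedicated servers versus usage-based cloud egress.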
Alternative architectures
- Multiple people suggest PMTiles + CDN as a simpler model (single large static file, range requests), noting comparable performance in small benchmarks; see the range‑request sketch after this list.
- Others ask why not run entirely on Cloudflare (Workers, R2, Cache Reserve); responses highlight migration effort and the risk of variable, usage‑based bills vs predictable Hetzner dedicated servers.
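To illustrate the PMTiles model: the entire tileset sits in one static archive, and clients fetch byte spans with HTTP Range requests, which CDNs cache and serve cheaply. A minimal sketch of that access pattern (hypothetical URL; real clients use a PMTiles reader library to resolve tile offsets from the archive's internal directory):

```python
import requests

# Hypothetical archive URL; a real deployment would point at a CDN-backed object store.
ARCHIVE = "https://tiles.example.org/planet.pmtiles"

def read_range(offset: int, length: int) -> bytes:
    """Fetch a byte span from the single static archive via an HTTP Range request."""
    resp = requests.get(
        ARCHIVE,
        headers={"Range": f"bytes={offset}-{offset + length - 1}"},
        timeout=10,
    )
    resp.raise_for_status()   # expect 206 Partial Content
    return resp.content

# A reader first pulls the header and root directory from the start of the file,
# then follows directory entries to the offset and length of each tile it needs.
print(len(read_range(0, 16_384)))
```

Because every request is a plain GET with a Range header against one immutable file, the serving path reduces to object storage plus a CDN, which is what the "why not all-Cloudflare" replies weigh against predictable dedicated-server bills.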