Show HN: I made a down detector for down detector

Humor, recursion, and “who watches the watchers”

  • The thread is dominated by jokes about infinite recursion: a down detector for the down detector “all the way down,” an “N‑down detector,” and shorthand like downdetectorsx5.com.
  • People riff on “Quis custodiet ipsos custodes?” and Watchmen, plus classic “Yo dawg, I heard you like down detectors” memes.
  • Several gag domains are registered or checked; the longer ones run into DNS label-length limits, prompting suggestions for more compact notation.
  • HN itself is jokingly called the “true down detector.”

How the site actually works (or doesn’t)

  • Users inspect the client code and find that it generates deterministic mock data: no real checks, just pseudo-random response times and fixed “up” statuses (a sketch of this pattern follows the list below).
  • This is seen as in keeping with the “shitpost” / novelty nature of the project.
  • Some ask how a serious detector should handle partial failures (e.g., Cloudflare’s human-verification page breaking while the origin still returns HTTP 200); a hedged sketch of such a check also follows this list.
  • Others link external uptime checkers monitoring the site, effectively creating a real meta‑detector chain.
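
A minimal sketch of the mock-data pattern commenters describe, in TypeScript; the seeded generator, names, and numeric ranges here are invented for illustration and are not taken from the site’s actual source:

```typescript
// Hypothetical reconstruction of the kind of client-side mock data commenters
// found: a seeded (deterministic) pseudo-random generator for response times,
// and statuses that are simply hard-coded to "up" with no network request.

type ServiceStatus = { name: string; status: "up"; responseTimeMs: number };

// mulberry32: a tiny deterministic PRNG; the same seed always yields the same sequence.
function mulberry32(seed: number): () => number {
  let a = seed >>> 0;
  return () => {
    a = (a + 0x6d2b79f5) >>> 0;
    let t = a;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

function mockStatuses(services: string[], seed = 42): ServiceStatus[] {
  const rand = mulberry32(seed);
  return services.map((name) => ({
    name,
    status: "up",                                  // never actually checked
    responseTimeMs: Math.round(50 + rand() * 200), // plausible-looking, stable per seed
  }));
}

console.log(mockStatuses(["downdetector.com", "downforeveryoneorjustme.com"]));
```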
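
On the partial-failure question, one plausible (purely illustrative) approach is to look past the status code and inspect the body for interstitial or challenge markers; the marker strings and probed URL below are assumptions, not the project’s logic:

```typescript
// Illustrative only: a check that treats an HTTP 200 carrying a challenge /
// interstitial page as "degraded" rather than "up". Marker strings and the
// probed URL are assumptions, not the actual site's behavior.

type Health = "up" | "degraded" | "down";

async function probe(url: string): Promise<Health> {
  try {
    const res = await fetch(url, { redirect: "follow" });
    if (!res.ok) return "down";

    const body = await res.text();
    // A 200 whose body is a human-verification interstitial means the edge is
    // serving a challenge while the origin may be fine (or unreachable).
    const looksLikeChallenge =
      body.includes("Verifying you are human") || body.includes("cf-challenge");
    if (looksLikeChallenge) return "degraded";

    return "up";
  } catch {
    return "down"; // DNS / TLS / network failure
  }
}

probe("https://example.com/").then((h) => console.log("health:", h));
```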

Redundancy, distributed detection, and graphs

  • Multiple comments suggest a second (or looping) instance to monitor the first, leading to ideas about directed graphs of monitors and distributed heartbeat networks.
  • One commenter outlines a distributed design: many nodes monitoring each other, clusters going silent as a signal of broader failure, and self‑healing to maintain resilience (see the sketch after this list).
  • Another argues that it’s fine for DownDetector to monitor the meta‑detector, as long as they’re on different stacks/regions.
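
A toy sketch of that monitor-graph idea, covering only the quorum logic and no networking; the node structure, timeout, and quorum fraction are all assumptions made for illustration:

```typescript
// Toy model of "monitors monitoring each other": a directed graph of watchers,
// where a node is presumed down once enough of the nodes watching it have
// missed its heartbeats. A whole cluster going quiet is itself the alarm.

interface MonitorNode {
  id: string;
  watches: string[];                      // outgoing edges: ids this node monitors
  lastHeartbeatSeen: Map<string, number>; // watched id -> last heartbeat (ms epoch)
}

const HEARTBEAT_TIMEOUT_MS = 30_000; // illustrative threshold
const QUORUM = 0.5;                  // fraction of watchers that must agree

function watchersOf(target: string, nodes: MonitorNode[]): MonitorNode[] {
  return nodes.filter((n) => n.watches.includes(target));
}

function isSuspectedDown(target: string, nodes: MonitorNode[], now: number): boolean {
  const watchers = watchersOf(target, nodes);
  if (watchers.length === 0) return false; // nobody watching: no signal either way
  const missing = watchers.filter((w) => {
    const last = w.lastHeartbeatSeen.get(target) ?? 0;
    return now - last > HEARTBEAT_TIMEOUT_MS;
  });
  return missing.length / watchers.length >= QUORUM;
}

// Nodes that the rest of the graph currently believes are down.
function silentNodes(nodes: MonitorNode[], now: number): string[] {
  return nodes.filter((n) => isSuspectedDown(n.id, nodes, now)).map((n) => n.id);
}

// Example: three nodes in a ring; "a" has not heard from "b" for a minute.
const now = Date.now();
const ring: MonitorNode[] = [
  { id: "a", watches: ["b"], lastHeartbeatSeen: new Map([["b", now - 60_000]]) },
  { id: "b", watches: ["c"], lastHeartbeatSeen: new Map([["c", now - 1_000]]) },
  { id: "c", watches: ["a"], lastHeartbeatSeen: new Map([["a", now - 2_000]]) },
];
console.log(silentNodes(ring, now)); // ["b"]
```

Spreading watcher edges across different providers, stacks, and regions, as the last comment above suggests, is what keeps a single infrastructure failure from silencing a node and all of its watchers at once.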

Cloudflare, CDNs, and infrastructure choices

  • The project appears to use Cloudflare DNS and AWS hosting; people note the irony that if major infra is down, this site likely is too.
  • Debate over whether a static status page genuinely needs a CDN (a minimal origin sketch follows this list):
    • One side: static + CDN is ideal for sudden traffic spikes and cheaper than over‑provisioned compute.
    • Other side: for basic static HTML, a CDN may be overkill if the origin is robust.
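
For context on the static-page side of that debate, the origin can be as small as the sketch below; the file name, port, and cache lifetime are arbitrary assumptions:

```typescript
// Minimal origin for a static status page: one pre-rendered HTML file, no
// per-request work. Paths, port, and TTL are illustrative.
import { createServer } from "node:http";
import { readFileSync } from "node:fs";

const page = readFileSync("status.html"); // pre-rendered at deploy time

createServer((req, res) => {
  res.writeHead(200, {
    "Content-Type": "text/html; charset=utf-8",
    // A short public TTL keeps the page fresh while letting any cache or CDN
    // placed in front absorb traffic spikes.
    "Cache-Control": "public, max-age=60",
  });
  res.end(page);
}).listen(8080);
```

With a cache in front, the CDN absorbs the spike; without one, the origin still only streams a small static file, which is roughly the shape of the two positions in the thread.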

Centralization vs smaller / regional providers

  • A long subthread discusses moving from US hyperscalers (Cloudflare, AWS) to European providers (Bunny.net, Hetzner, Scaleway, Infomaniak) for reliability, sovereignty, and independence.
  • Some report zero downtime with these alternatives; others share concrete Hetzner incidents and note that EU providers also have outages.
  • Disagreement over reliability incentives:
    • Pro‑small: fewer services, less complexity, stronger incentive not to fail.
    • Skeptical: smaller players may use lower‑tier datacenters; their outages just don’t make headlines.
  • Separate debate over cloud vs on‑prem: some say cloud is overused and on‑prem can be cheaper and more sovereign; others argue replicating cloud capabilities in‑house is prohibitively complex.
  • Cloudflare and AWS outages (including one attributed to a Rust unwrap), along with CrowdStrike’s past incident, are cited to question how much such events actually affect customer churn or stock price.

Related tools and alternatives

  • People mention other monitoring tools and services: uptime projects like hostbeat.info, Datadog’s updog.ai, and EU‑centric transactional email/self‑hosted options (e.g., Sweego, MailPace, Hyvor Relay).
  • Some readers say this thread makes them feel better about hacking on their own monitoring tools despite existing mature competitors.