The Cloudflare outage might be a good thing
Debate over “Nuclear‑Resilient Internet” Myth
- Several comments challenge the article’s claim that the internet was “designed for decentralisation to survive nuclear war.”
- One side cites official ARPANET history: initial goals were academic resource sharing, not command‑and‑control under attack.
- Others argue funding motivations differed from stated research goals: packet switching was explicitly developed for nuclear‑survivable comms, and ARPANET rode that wave, even if researchers weren’t told.
- Consensus: survivability influenced design thinking, but “built for nuclear war” as a simple origin story is misleading.
Will the Cloudflare Outage Change Anything?
- Many think it won’t: everyone already knows about centralization (Cloudflare, AWS, Gmail, GitHub), but outages haven’t driven real diversification.
- Internally, providers will fix bugs and harden systems; externally, most customers will stay because switching and multi‑cloud are expensive.
- Some argue customers don’t “punish” downtime the way they do power grid failures, so incentives remain weak.
Centralization vs Decentralization
- Pro‑centralization view: big providers are far more redundant and reliable than most self‑hosted setups; small providers have more frequent, “chronic” issues.
- Counterpoint: monoculture creates correlated failures and concentrates power—easier censorship, surveillance, political pressure, and catastrophic single events.
- Several note that centralization offers CYA: if AWS/Cloudflare fail, it’s seen as an “act of God,” diffusing blame.
Redundancy, Risk, and Cost
- People stress cost–benefit: most businesses accept a few hours’ downtime every year rather than pay for multi‑region/multi‑cloud and alternative DNS/CDN paths.
- Some SRE‑minded commenters advocate “have backup plans for your backup plans” (a minimal failover sketch follows this list), but others say that’s financially unrealistic except for the most critical systems.
- There’s concern that complexity (microservices, k8s on hyperscalers) increases failure modes even as redundancy increases.
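As a concrete illustration of the “backup plans for your backup plans” idea, here is a minimal Python sketch of client‑side failover across two delivery paths. The endpoint names (cdn.example.com as the proxied path, origin-direct.example.com as a path bypassing the CDN) are hypothetical placeholders, not anything from the discussion; real multi‑CDN setups typically also involve DNS‑level health checks and traffic steering.

```python
# Minimal client-side failover sketch. Both endpoint names are hypothetical
# placeholders: "cdn.example.com" stands in for a CDN/proxy path and
# "origin-direct.example.com" for a backup path that bypasses the CDN.
import urllib.error
import urllib.request

ENDPOINTS = [
    "https://cdn.example.com/api/status",            # primary: CDN/proxy path
    "https://origin-direct.example.com/api/status",  # backup: direct-to-origin path
]

def fetch_with_failover(urls=ENDPOINTS, timeout=3):
    """Try each endpoint in order; return the first successful response body."""
    last_error = None
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status == 200:
                    return resp.read()
        except (urllib.error.URLError, OSError) as exc:
            last_error = exc  # remember the failure, try the next path
    raise RuntimeError(f"all endpoints failed; last error: {last_error}")
```

Even this toy version hints at why commenters call redundancy expensive: the backup path has to be provisioned, secured, and kept warm even though it carries almost no traffic in a normal year.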
Self‑Hosting, DDoS, and Bots
- Experiences with self‑hosting email/web are mixed: some report painless FreeBSD/mail‑in‑a‑box setups; others gave up due to deliverability issues and blacklists.
- Many see Cloudflare’s main value in DDoS mitigation and bot filtering; small hosts and VPS providers can’t match it, and botnets increasingly use residential IPs.
- A minority argues DDoS is rare for most sites and that serving extra bot traffic or using lighter‑weight defenses (one is sketched below) can be acceptable.
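As one example of a “lighter‑weight defense” in the sense used above, the sketch below is a per‑IP token‑bucket rate limiter in Python; the rate and burst numbers are arbitrary placeholders. It will not stop a distributed attack from residential IPs, which is exactly the gap commenters say Cloudflare fills, but it can blunt the routine bot traffic a small self‑hosted site sees.

```python
# Per-IP token-bucket rate limiter: a "lighter-weight defense" sketch.
# RATE and BURST are illustrative values, not recommendations.
import time
from collections import defaultdict

RATE = 5.0    # tokens refilled per second, per client
BURST = 20.0  # maximum bucket size (allowed burst)

_buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

def allow_request(client_ip: str) -> bool:
    """Return True if the client still has a token for this request."""
    bucket = _buckets[client_ip]
    now = time.monotonic()
    # Refill tokens for the elapsed time, capped at the burst size.
    bucket["tokens"] = min(BURST, bucket["tokens"] + (now - bucket["last"]) * RATE)
    bucket["last"] = now
    if bucket["tokens"] >= 1.0:
        bucket["tokens"] -= 1.0
        return True
    return False
```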
Geofencing, Openness, and Security
- One practitioner wants gas‑station air‑pump systems to be accessible only from the US, a goal described as “literally impossible” due to VPNs and proxies.
- Others push back: you can reduce, though not eliminate, foreign access (GeoIP, VPN/proxy lists, client certs, zero‑trust; see the sketch after this list), but 0% false positives/negatives is unattainable.
- This sparks meta‑discussion about rising “anti‑openness” attitudes (geo‑blocks, ID/age checks) versus the desire to limit exposure to state‑level attackers.
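To make the “reduce, not eliminate” point concrete, here is a best‑effort GeoIP allowlist sketch in Python using MaxMind’s geoip2 package; the database path and the reject‑unknown‑IPs policy are assumptions for illustration. A VPN exit or proxy inside the allowed country passes straight through, which is why commenters insist 0% error is unattainable.

```python
# Best-effort GeoIP allowlist check. The database path is an assumption;
# a GeoLite2-Country.mmdb file must be downloaded from MaxMind separately.
import geoip2.database
import geoip2.errors

ALLOWED_COUNTRIES = {"US"}
reader = geoip2.database.Reader("/var/lib/geoip/GeoLite2-Country.mmdb")

def is_probably_allowed(client_ip: str) -> bool:
    """Return True if the IP geolocates to an allowed country (best effort)."""
    try:
        record = reader.country(client_ip)
    except geoip2.errors.AddressNotFoundError:
        return False  # unknown IPs rejected here; a real deployment might differ
    return record.country.iso_code in ALLOWED_COUNTRIES
```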
Regulation and “Software Building Codes”
- Some argue the internet is still technically decentralized and the real problem is lack of regulation: companies can build brittle, critical systems with no safety standards.
- Proposal: treat large‑scale digital infrastructure more like buildings and power grids, with mandatory “software building codes,” especially for banking and healthcare, where simultaneous outages are society‑level risks.
Real‑World Impact and Attitudes to Outages
- Examples include interrupted medical imaging (RTG/X‑ray), POS failures, and missed high‑value ad campaigns.
- Some commenters accept occasional major outages as the price of efficiency (“stuff breaks, design accordingly”); others maintain that as more daily life depends on online services, correlated failures become increasingly dangerous.