Anubis saved our websites from a DDoS attack
Nature of the traffic and traditional mitigations
- Some argue the incident looks more like aggressive crawling than a classic, volumetric DDoS; volumetric attacks (tens of Gbps) require upstream/network-side mitigation, not Anubis.
- Others note that residential-proxy botnets (tens of thousands of IPs) make simple IP-based rate limiting ineffective; residential “proxy SDKs” embedded in consumer apps were cited as a major, hard-to-regulate source of such traffic.
- Suggestions: cache everything possible, keep dynamic endpoints minimal and explicitly rate‑limited by URL, or even disable certain expensive endpoints under load (a rough sketch of per-path rate limiting follows below).
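To make the “explicitly rate‑limited by URL” suggestion concrete, here is a minimal sketch of per-path, per-IP rate limiting as a Go middleware using golang.org/x/time/rate. The path prefix, limits, and handler names are illustrative assumptions, not any commenter's actual setup; and as noted above, keying on IP is exactly what residential proxies undermine, so this complements caching and PoW rather than replacing them.

```go
package main

import (
	"net"
	"net/http"
	"strings"
	"sync"

	"golang.org/x/time/rate"
)

var (
	mu       sync.Mutex
	limiters = map[string]*rate.Limiter{} // grows unbounded; a real setup would evict idle entries
)

// limiterFor returns the token bucket for one client IP, creating it on first use.
func limiterFor(ip string) *rate.Limiter {
	mu.Lock()
	defer mu.Unlock()
	l, ok := limiters[ip]
	if !ok {
		l = rate.NewLimiter(rate.Limit(1), 5) // 1 req/s with a burst of 5 (illustrative numbers)
		limiters[ip] = l
	}
	return l
}

// limitExpensive rate-limits only the costly dynamic prefix; everything else
// should be cacheable and passes straight through.
func limitExpensive(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if strings.HasPrefix(r.URL.Path, "/search") { // hypothetical "expensive" endpoint
			ip, _, err := net.SplitHostPort(r.RemoteAddr)
			if err != nil {
				ip = r.RemoteAddr
			}
			if !limiterFor(ip).Allow() {
				http.Error(w, "too many requests", http.StatusTooManyRequests)
				return
			}
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})
	http.ListenAndServe(":8080", limitExpensive(mux))
}
```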
What Anubis does and why it worked here
- Anubis sits in front of the site and forces a proof‑of‑work (PoW) challenge via JavaScript; once the challenge is solved, a cookie lets the client through for a period (a simplified sketch of the scheme follows this list).
- Commenters think it helped mainly because most botnets don’t run JS and were just hitting expensive URLs with curl/wget-like clients.
- Some point out that PoW shields and JS challenges have existed for years (Cloudflare, PoW‑Shield, haproxy‑protection, Hashcash); Anubis’ appeal is packaging, OSS licensing, ease of deployment, and good timing around AI-scraper frustration.
- A few stress PoW alone doesn’t stop more sophisticated actors using headless browsers and proxies; you eventually need heuristics, fingerprinting, and IP reputation.
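To make the challenge mechanics concrete, here is a simplified sketch of the hash-based proof of work such tools rely on: the server issues a random challenge and a difficulty, the client searches for a nonce whose SHA-256 digest has enough leading zero bits, and the server verifies with a single hash. This shows the general scheme only; Anubis's exact protocol, encoding, and difficulty tuning differ.

```go
package main

import (
	"crypto/rand"
	"crypto/sha256"
	"encoding/binary"
	"encoding/hex"
	"fmt"
	"math/bits"
)

// leadingZeroBits counts leading zero bits of a digest, used as the difficulty measure.
func leadingZeroBits(sum [32]byte) int {
	n := 0
	for _, b := range sum {
		if b == 0 {
			n += 8
			continue
		}
		n += bits.LeadingZeros8(b)
		break
	}
	return n
}

// solve searches for a nonce such that SHA-256(challenge || nonce) has at least
// `difficulty` leading zero bits. This is the expensive step the client performs.
func solve(challenge []byte, difficulty int) uint64 {
	buf := make([]byte, len(challenge)+8)
	copy(buf, challenge)
	for nonce := uint64(0); ; nonce++ {
		binary.BigEndian.PutUint64(buf[len(challenge):], nonce)
		if leadingZeroBits(sha256.Sum256(buf)) >= difficulty {
			return nonce
		}
	}
}

// verify is the cheap server-side check: one hash per submitted solution.
func verify(challenge []byte, nonce uint64, difficulty int) bool {
	buf := make([]byte, len(challenge)+8)
	copy(buf, challenge)
	binary.BigEndian.PutUint64(buf[len(challenge):], nonce)
	return leadingZeroBits(sha256.Sum256(buf)) >= difficulty
}

func main() {
	challenge := make([]byte, 16)
	if _, err := rand.Read(challenge); err != nil { // server-issued random challenge
		panic(err)
	}

	const difficulty = 20 // illustrative; real deployments tune this
	nonce := solve(challenge, difficulty)
	fmt.Printf("challenge=%s nonce=%d valid=%v\n",
		hex.EncodeToString(challenge), nonce, verify(challenge, nonce, difficulty))
}
```

The asymmetry is the point: solving costs the client many hashes, verifying costs the server one, and (per the cookie mentioned above) the cost is paid once per validity period rather than per request.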
User experience, privacy, and nonstandard clients
- Anubis is widely seen as less hostile than Cloudflare’s captchas, especially for non-mainstream browsers and users behind VPNs/adblockers.
- But it requires JS and cookies: users with temporary containers or cookie blocking hit the challenge repeatedly; some report endless reload loops with cookies disabled.
- Arch Wiki’s adoption drew criticism because people often access it from broken systems or minimal browsers; Anubis makes that harder.
- There is interest in non‑JS / protocol‑level PoW (e.g., standardized HTTP headers) and in tools like “checkpoint” that can work without JS; a purely hypothetical header-based flow is sketched below.
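For illustration only, a JS-free exchange might look like the following sketch: the server answers an unauthenticated request with challenge headers, and any client (curl included) solves the puzzle and retries. The X-Pow-* header names and the whole flow are invented for this sketch; no such standard exists today, which is precisely why commenters want one.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"net/http"
	"strconv"
	"strings"
)

// solveHex finds a nonce (as a decimal string) whose SHA-256 over challenge+nonce
// starts with `difficulty` zero hex digits. Same idea as the JS challenge, minus the JS.
func solveHex(challenge string, difficulty int) string {
	prefix := strings.Repeat("0", difficulty)
	for n := 0; ; n++ {
		nonce := strconv.Itoa(n)
		sum := sha256.Sum256([]byte(challenge + nonce))
		if strings.HasPrefix(fmt.Sprintf("%x", sum), prefix) {
			return nonce
		}
	}
}

// fetchWithPoW is a hypothetical non-JS client flow: if the origin answers with
// invented X-Pow-Challenge / X-Pow-Difficulty headers, solve and retry with X-Pow-Nonce.
func fetchWithPoW(url string) (*http.Response, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	challenge := resp.Header.Get("X-Pow-Challenge") // invented header name
	if challenge == "" {
		return resp, nil // no challenge issued; normal response
	}
	resp.Body.Close()
	difficulty, _ := strconv.Atoi(resp.Header.Get("X-Pow-Difficulty")) // invented header name

	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("X-Pow-Challenge", challenge)                       // invented header name
	req.Header.Set("X-Pow-Nonce", solveHex(challenge, difficulty))     // invented header name
	return http.DefaultClient.Do(req)
}

func main() {
	resp, err := fetchWithPoW("https://example.org/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```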
Bots, user agents, and AI scrapers
- Anubis ships with user‑agent denylists (covering many AI crawlers), which some say punish “honest” bots that identify themselves and reward those that lie about their UA (see the sketch after this list).
- Defenders reply that “honest” AI scrapers still impose costs without returning traffic, unlike traditional search engines, so blocking them is reasonable.
- Fingerprinting and shared reputation (“hivemind”) are being explored, though residential proxies and privacy concerns make this tricky.
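To ground the “rewards those that lie” complaint, here is an illustrative sketch of what a UA denylist check amounts to; the patterns and handler are made up for the example, not Anubis's shipped list. A crawler that simply changes its User-Agent string passes untouched, while one that identifies itself honestly is blocked.

```go
package main

import (
	"net/http"
	"regexp"
)

// Illustrative patterns only; real lists (including the one Anubis ships) are
// longer and maintained separately.
var deniedUA = []*regexp.Regexp{
	regexp.MustCompile(`(?i)GPTBot`),
	regexp.MustCompile(`(?i)CCBot`),
	regexp.MustCompile(`(?i)python-requests|curl|wget`),
}

// denyByUA blocks requests whose User-Agent matches a denylist entry.
// It only catches clients that tell the truth about what they are.
func denyByUA(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		ua := r.UserAgent()
		for _, re := range deniedUA {
			if re.MatchString(ua) {
				http.Error(w, "forbidden", http.StatusForbidden)
				return
			}
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})
	http.ListenAndServe(":8080", denyByUA(mux))
}
```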
Branding, licensing, and the open‑source “social contract”
- The animated Anubis mascot divides opinion: some love the playfulness; others say it’s too unprofessional for client‑facing sites.
- The project is MIT‑licensed, but the maintainer “asks (but does not demand)” that the character not be removed, offering a paid white‑label option.
- This sparked a big debate:
  - One side sees removing the logo without paying as socially unethical, exploiting prosocial work while ignoring the maintainer's explicit wishes.
  - The other side argues the MIT license explicitly allows modification; adding extra‑legal “social” restrictions or shaming users is itself problematic.
- Some view the model (free version with cute branding, paid neutral branding plus extra features/reputation DB) as a clever way to fund OSS; others worry such social pressure contributes to maintainer burnout or blurs the line between “free software” and source‑available business models.
Comparisons and alternatives
- Alternatives mentioned: Cloudflare challenges, mCaptcha, PoW‑Shield, haproxy‑protection, nginx PoW modules, “checkpoint”, and traditional rate‑limiting + fail2ban/mod_evasive.
- Several commenters complain that large commercial WAF/anti‑bot suites (Cloudflare, Akamai, etc.) over‑block, rely on opaque fingerprinting, and reinforce browser monoculture; Anubis is praised for (currently) working in many niche browsers.
- Skeptics note that as Anubis adds more advanced JS and reputation/fingerprinting, it risks drifting toward the same complexity and false‑positive issues.