A thought on JavaScript "proof of work" anti-scraper systems

Purpose of JS PoW / Anubis

  • Many comments frame JS proof-of-work (PoW) systems like Anubis primarily as DDoS mitigation against LLM scrapers and other aggressive crawlers, not as “AI rights management.”
  • The goal is to raise the economic cost of bulk scraping: turning a cheap HTTP GET into a request that burns noticeable compute across large fleets, while staying mostly invisible to normal users.
  • Some see this as analogous to Hashcash-style anti-DoS: stateless, simple, and shifting some of the cost onto the client (a minimal sketch follows this list).
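
To make the mechanics concrete, below is a minimal sketch of a Hashcash-style puzzle in TypeScript (Node). It assumes a SHA-256 leading-zero-bits scheme for illustration; Anubis's actual parameters and wire format may differ. The asymmetry is the point: the client loops over nonces, while the server verifies with a single hash.

    import { createHash, randomBytes } from "node:crypto";

    function leadingZeroBits(buf: Buffer): number {
      let bits = 0;
      for (const byte of buf) {
        if (byte === 0) { bits += 8; continue; }
        bits += Math.clz32(byte) - 24; // clz32 counts over 32 bits; bytes use 8
        break;
      }
      return bits;
    }

    // Client side: expensive -- roughly 2^difficulty hash attempts on average.
    function solve(challenge: string, difficulty: number): number {
      for (let nonce = 0; ; nonce++) {
        const digest = createHash("sha256").update(`${challenge}:${nonce}`).digest();
        if (leadingZeroBits(digest) >= difficulty) return nonce;
      }
    }

    // Server side: cheap and stateless -- one hash verifies the work.
    function verifyWork(challenge: string, nonce: number, difficulty: number): boolean {
      const digest = createHash("sha256").update(`${challenge}:${nonce}`).digest();
      return leadingZeroBits(digest) >= difficulty;
    }

    const challenge = randomBytes(16).toString("hex"); // issued per visitor
    const nonce = solve(challenge, 16);                // ~65,000 attempts expected
    console.log(verifyWork(challenge, nonce, 16));     // true -> serve the page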

Effectiveness and Limits Against Scrapers

  • Skeptics argue major scrapers already execute JS and can adapt: run real browsers, keep cookies, reverse-engineer PoW flows, and even GPU-accelerate PoW solving.
  • Others counter that even modest per-request friction scales painfully at “tens of thousands of requests per minute,” forcing scraping operations to become more selective or efficient (see the back-of-envelope after this list).
  • There are concrete reports of LLM and large-company scrapers hammering sites while ignoring robots.txt and circumventing redirects and IP blocks, sometimes amounting to a practical DDoS.
  • Some insist no technical anti-scraping measure will truly “win”; at best, PoW shifts costs and buys time.
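
A back-of-envelope calculation, with all figures assumed for illustration rather than measured, shows why even a small per-request cost bites at fleet scale:

    // All numbers here are illustrative assumptions, not measurements.
    const solveSeconds = 1;           // assumed CPU time per PoW solution
    const requestsPerMinute = 50_000; // "tens of thousands of requests per minute"

    // Every request now costs solveSeconds of compute somewhere in the fleet.
    const coresPinned = (requestsPerMinute / 60) * solveSeconds;
    console.log(`~${Math.round(coresPinned)} CPU cores pinned continuously`);
    // => ~833 cores: invisible per request, a real line item at fleet scale.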

Impact on Users, Devices, and the Web

  • There is strong concern about degraded UX: extra seconds of load time, especially on old phones and low-end devices, and a general “enshittification” of the web.
  • Critics point out that PoW punishes honest, low-powered users more than well-funded operators or botnets running on compromised infrastructure.
  • Others argue that, tuned correctly, PoW can be negligible for humans but ruinous at bot scale; disagreement remains over whether such tuning is realistically achievable (see the arithmetic after this list).
  • Environmental worries: PoW and cryptomining burn energy for no direct user benefit, on top of already-bloated JS and ad tech.
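
The tuning disagreement reduces to arithmetic: a d-bit puzzle takes about 2^d hash attempts on average, so each extra bit doubles both the human's wait and the bot's cost, while hardware differences multiply the gap. The hash rates below are assumptions for illustration only:

    // Assumed hash rates, for illustration only.
    const phoneHps = 100_000;     // old phone hashing in a browser
    const gpuHps = 1_000_000_000; // GPU-accelerated solver

    for (const d of [12, 16, 20]) {
      const attempts = 2 ** d; // expected attempts for d leading zero bits
      const phoneSec = (attempts / phoneHps).toFixed(2);
      const gpuMs = ((attempts / gpuHps) * 1000).toFixed(3);
      console.log(`d=${d}: phone ~${phoneSec}s, GPU ~${gpuMs}ms`);
    }
    // d=20 is ~10s on the phone but ~1ms on the GPU. That ~10,000x gap is
    // the core of the "punishes low-power users" critique.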

Cryptomining and “Useful Work” Variants

  • Several commenters suggest swapping artificial PoW for Monero or other in-browser mining, turning scraper effort into publisher revenue or de facto “micropayments.”
  • Pushback: miners or bots that find winning hashes can keep them for themselves; browser hardware is terrible for profitable mining; and prior art (Coinhive) produced tiny payouts and huge abuse.
  • “Useful” PoW (protein folding, prime searches, etc.) is considered impractical: it needs large datasets and complex coordination, and partial work is hard to verify.

Arms Race, Attestation, and Centralization

  • Some foresee browser vendors using their installed base to scrape on behalf of big players; browser engines already blur the line between “user agent” and “corporate scraper.”
  • Hardware-based attestation/token systems are mentioned as an alternative to PoW, but they would effectively lock out Linux, rooted, and older devices while concentrating power in big platforms (a hypothetical token-check sketch follows this list).
  • Others foresee login walls and walled gardens as the real “endgame” defense, eroding anonymity and the open web but aligning with economic realities.
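
For contrast with PoW, here is a hypothetical sketch of an attestation-token check, loosely in the spirit of schemes like Private Access Tokens; the names and flow are invented for illustration and do not correspond to any real API:

    import { generateKeyPairSync, sign, verify } from "node:crypto";

    // The attester (a platform or hardware vendor) holds the only signing key;
    // sites merely check signatures. Whoever runs the attester decides which
    // devices count as "real" -- exactly the centralization critics worry about.
    const attester = generateKeyPairSync("ed25519");

    function issueToken(deviceClaim: string): Buffer {
      // The vendor signs a claim after inspecting the device (hypothetical).
      return sign(null, Buffer.from(deviceClaim), attester.privateKey);
    }

    function checkToken(deviceClaim: string, token: Buffer): boolean {
      // Site-side check: one signature verification, no CPU burned on PoW.
      return verify(null, Buffer.from(deviceClaim), attester.publicKey, token);
    }

    const token = issueToken("genuine-device");
    console.log(checkToken("genuine-device", token)); // true: request allowed
    console.log(checkToken("rooted-device", token));  // false: locked out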

Scrapers’ and Publishers’ Perspectives

  • People doing small-scale, legitimate scraping (e.g., personal frontends, aggregation of open educational resources) dislike PoW walls, especially when the content is open-licensed or scraping is explicitly allowed.
  • Some argue the real problem is poorly behaved corporate bots externalizing costs onto small sites; PoW is self-defense, not hostility to openness.
  • There are calls for better distribution channels (IPFS, APIs, push-based feeds) so publishers can share data without being hammered by generic HTTP crawlers; a polite-crawler sketch follows below.
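
As a sketch of the well-behaved small scraper commenters have in mind, the snippet below polls slowly, identifies itself, and uses conditional GETs so unchanged content costs the server almost nothing; the URL, contact address, and interval are placeholders:

    // Placeholders throughout: FEED_URL, contact address, polling interval.
    const FEED_URL = "https://example.org/feed.json";
    let etag: string | null = null;

    async function poll(): Promise<void> {
      const res = await fetch(FEED_URL, {
        headers: {
          "User-Agent": "my-small-bot/1.0 (contact@example.org)", // identify yourself
          ...(etag ? { "If-None-Match": etag } : {}),             // conditional GET
        },
      });
      if (res.status === 304) return; // unchanged: near-zero cost for the server
      etag = res.headers.get("ETag");
      const body = await res.text();
      // ... process body ...
    }

    // Poll slowly instead of hammering; a push feed or API would be better still.
    setInterval(poll, 15 * 60 * 1000);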