Dead Internet Theory

Stylistic “AI tells” and the em‑dash wars

  • Many commenters treat phrases like “you’re absolutely right” and heavy em‑dash use as telltale signs of LLM output.
  • Others push back: these are long‑standing human habits, common among typography nerds and professional writers and in some regional dialects.
  • Several note that trying to avoid looking like AI (e.g., dropping em‑dashes) is both futile and corrosive: whatever humans stop doing, models trained on human writing will soon stop mimicking too, so the tell keeps moving.
  • Rough consensus: style cues are weak heuristics at best, not reliable proof of machine authorship; the toy scorer below makes this concrete.
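
To see why such cues are weak, consider a toy tell‑scorer of the sort the thread gestures at; the phrase list and scoring scheme are illustrative assumptions, not anyone’s real detector, and an ordinary typography‑loving human trips it immediately:

    # Toy "AI tell" scorer: counts em dashes and stock phrases per 100 words.
    # The phrase list is an illustrative assumption, not a real detector.
    TELLS = ["you're absolutely right", "as an ai", "delve"]

    def tell_score(text: str) -> float:
        lowered = text.lower()
        hits = lowered.count("\u2014")  # em dash
        hits += sum(lowered.count(phrase) for phrase in TELLS)
        return 100.0 * hits / max(len(text.split()), 1)

    # A typography-loving human trips the heuristic instantly.
    print(tell_score("You're absolutely right \u2014 the em dash predates LLMs."))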

Bots, Reddit, and ad‑driven decay

  • Multiple commenters argue Reddit has been “ruined” by bots, low‑effort posts, algorithmic feeds that ignore subscriptions, and API changes that weakened moderation tooling.
  • Some suspect platforms tolerate or even encourage bot traffic to inflate engagement and ad metrics; others counter that advertisers have tracking and KPIs, so pure bot inflation would be unsustainable at scale.
  • There’s disagreement on how much of Reddit is actually bots versus low‑effort humans.

Dead internet vs dark forest / boutique internet

  • Some fear a future where AI slop and paywalls “eat” the public web, leaving innovation and real conversation only behind gated, corporate, or elite spaces.
  • Others prefer a “dark forest” model: small, semi‑hidden pockets of human activity (invite‑only forums, niche communities, boutique sites) amid a sea of automated sludge.
  • Older patterns (manual directories, blogrolls, RSS, curated lists) are proposed as ways to find real people again; a minimal sketch follows this list.
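
As a small illustration of the RSS pattern, here is a minimal blogroll reader, assuming the third‑party feedparser library; the feed URLs are placeholders, not sites named in the thread:

    # Minimal blogroll reader: headlines come only from feeds you chose,
    # with no recommendation algorithm in between.
    # Requires: pip install feedparser. The URLs below are placeholders.
    import feedparser

    BLOGROLL = [
        "https://example.com/feed.xml",
        "https://example.org/atom.xml",
    ]

    for url in BLOGROLL:
        feed = feedparser.parse(url)
        for entry in feed.entries[:3]:  # latest few posts per blog
            print(f"{feed.feed.get('title', url)}: {entry.title}")
            print(f"  {entry.link}")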

Social media, forums, and scale

  • Long argument over whether HN is “social media” or merely a forum; the broader point is that the term “social media” has become blurry.
  • Discord, Matrix, WhatsApp, private forums, and small paid communities are cited as surviving examples of pre‑algorithmic, relationship‑based interaction, though all remain subject to eventual “enshittification.”

AI slop, rage bait, and misinformation

  • Widespread worry about AI‑generated videos (e.g., fake “racist cop” clips) and rage‑bait content optimized for engagement.
  • Some note this continues an old pattern (selective framing, hoaxes, propaganda) but at far greater scale and lower cost.
  • Several expect growing difficulty in verifying reality, predicting more cynicism and possibly a cultural shift toward dense, high‑stakes writing and trusted reputational filters.

Verification, provenance, and detection

  • Technical ideas: provenance watermarking (C2PA, SynthID), latency‑based geolocation to catch phone farms (sketched after this list), biometric or ID‑based “human” verification, and AI‑banned instances (e.g., some Mastodon servers).
  • Skeptics point out that watermarks can be stripped or routed around, VPNs and relays can spoof location, and strict verification threatens privacy and creates new abuse vectors.
  • Strong view: recognition may remain easier than generation, but no automated detector will be foolproof.
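
The latency idea rests on simple physics: a round trip cannot beat the speed of light in fiber, so a measured RTT puts a hard ceiling on how far away a client can be. A sketch with illustrative constants follows; note it can falsify a claimed location but never confirm one, which is exactly the skeptics’ point:

    # Latency sanity check: light in fiber covers ~200 km per millisecond,
    # so an RTT of t ms bounds the client to at most 100*t km away.
    # Constants and example values are illustrative assumptions.
    SPEED_IN_FIBER_KM_PER_MS = 200.0  # roughly 2/3 of c

    def max_distance_km(rtt_ms: float) -> float:
        # One-way distance bound implied by half the round-trip time.
        return (rtt_ms / 2.0) * SPEED_IN_FIBER_KM_PER_MS

    def location_claim_possible(rtt_ms: float, claimed_km: float) -> bool:
        # False proves the claim is a lie; True proves nothing, because
        # VPNs and relays can only make RTT longer, never shorter.
        return claimed_km <= max_distance_km(rtt_ms)

    # A client 8,000 km away cannot answer a probe in 20 ms.
    print(location_claim_possible(rtt_ms=20.0, claimed_km=8000.0))  # False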

Open source, AI use, and authenticity norms

  • One GitHub project, promoted as “production ready,” is widely read as obviously AI‑assisted despite the author’s denials; readers report feeling gaslit.
  • Some argue there is no ethical duty to disclose tools used; others say misrepresenting hand‑authorship, especially when quality claims are high, undermines trust.
  • Broader unease that cheap LLM‑aided “vibe code” and SEO‑style libraries will pollute ecosystems, forcing developers to audit dependencies much more carefully; a minimal audit sketch follows.
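
One cheap, admittedly weak starting point for such audits, sketched with Python’s standard importlib.metadata; the “no declared source link” flag is an illustrative heuristic, not evidence of machine authorship:

    # Inventory installed packages and flag any that declare no source
    # repository or homepage: one weak signal worth a closer look when
    # auditing a dependency tree. Absence of a URL proves nothing by itself.
    from importlib.metadata import distributions

    for dist in distributions():
        meta = dist.metadata
        name = meta.get("Name", "?")
        has_links = bool(meta.get("Home-page") or meta.get_all("Project-URL"))
        if not has_links:
            print(f"no source link declared: {name} {dist.version}")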

Emotional impact and shifting baselines

  • Several humans report being falsely accused of being bots, finding it demoralizing given the effort they put into careful writing.
  • Others note that as AI output becomes ubiquitous, even normal literacy and good typography are treated with suspicion.
  • Underneath the theory talk is a shared sense of loss: long‑form, sincere, handcrafted contributions no longer function as “proof‑of‑work” for human thought.