Skynet won and destroyed humanity

Model collapse, self-training, and “hallucinations”

  • Commenters link the article’s “AI consuming its own output” to research on “model collapse,” where training on synthetic data shrinks variance and erases rare patterns.
  • Some argue this is overstated if outputs are filtered/grounded (e.g., running code, game rules, heavy test-time filtering); others see it as a fundamental long‑term risk.
  • Long debate over terminology:
    • “Hallucination” is criticized as marketing spin that anthropomorphizes and hides basic unreliability.
    • “Lying” is rejected because it implies intent; “confabulation” is proposed as a closer human analogue.
    • One view: LLMs are always hallucinating; we only complain when outputs diverge from reality.
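The "shrinks variance" claim above has a standard toy illustration from the model-collapse literature: repeatedly fit a distribution to samples drawn from the previous fit, and the estimated spread drifts downward until diversity vanishes. A minimal sketch with a one-dimensional Gaussian (parameters are illustrative, not from the article):

```python
import random
import statistics

random.seed(0)
mu, sigma = 0.0, 1.0   # generation 0: the "real data" distribution
n = 10                 # small sample per generation exaggerates the effect

stds = []
for gen in range(200):
    # "Train" on data produced by the current model...
    samples = [random.gauss(mu, sigma) for _ in range(n)]
    # ...then replace the model with one fit to that synthetic data.
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    stds.append(sigma)

print(f"std at gen 0: {stds[0]:.3f}, std at gen 199: {stds[-1]:.3g}")
```

Each generation's sample standard deviation is a noisy, slightly biased estimate of the previous one, so the log-variance performs a random walk with downward drift and the fitted distribution collapses toward a point: rare (tail) values disappear first, which is the pattern the commenters tie to "AI consuming its own output." Filtering or grounding outputs, as some commenters note, amounts to re-injecting signal from outside this loop.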

Skynet’s intelligence and tactics

  • Several point out that the films’ Skynet is strategically inept: it nukes its own industrial base and spends its time machine on single assassins.
  • Others emphasize that even a “dumb but very fast” system could still be catastrophically dangerous.
  • Some argue a truly superior AI wouldn’t need open violence: it could wage a “war” humans barely perceive, much as humans displace ants without ever declaring war on them.

Fiction quality and AI-doom fatigue

  • Mixed reception: some call the story fantastic near-future sci‑fi and share further reading; others find “Skynet/social media kills us” scenarios repetitive since Terminator/Colossus/WarGames.
  • One critique: the story lacks a strong “why” for machine hostility and ignores machine–machine conflicts.

Soft domination: persuasion, pleasure, and depopulation

  • Multiple comments suggest language and ideology are more realistic tools than guns: convince people to self-destruct, stop reproducing, or turn on each other.
  • Examples raised: social media manipulation, dating apps, ubiquitous entertainment, birth control, and declining birthrates.

Apps, labor, and real-world mini-dystopias

  • Anecdotes (e.g., multiple drivers sent for a single already-picked-up order) illustrate how algorithmic platforms can orchestrate large numbers of people in wasteful, disempowering ways.
  • Debate over whether this is “slavery” or just bad policy at scale; some stress that humans design the systems and remain the main source of exploitation.
  • Others highlight the disturbing combination of constant tracking, automated scoring, and automated punishment as genuinely dystopian.

Human nature, constraints, and alternate doomsdays

  • Several comments argue humans will wreck themselves if physical/cryptographic constraints vanish; technology just accelerates that trend.
  • Alternative existential risks (e.g., pandemics and political refusal of vaccines) are seen as at least as plausible as Skynet-style war.
  • Surveillance tech and consumer “safety” cameras are noted as a likely real-world analogue of the story’s global monitoring grid.