You are the scariest monster in the woods

Humanity as the real monster

  • Several comments echo the article’s theme through fiction (Disco Elysium, I Am Legend, Station Eleven): to other life, humans are the anomaly, a fast‑reproducing, environment‑destroying predator that eats other sentient animals and industrializes the practice into “cuisine.”
  • Others push back on romanticizing nature: animals fear us mainly because we’re very large and dangerous, not because they “know” we’ll destroy the planet.
  • Some extend the thought to first contact: if aliens don’t consume life for food, our meat‑eating, habitat‑erasing behavior could make us look like a devouring swarm.

AGI possibility and the nature of intelligence

  • Big split on whether AGI is even possible. One camp treats human cognition as a product of evolution and so in‑principle reproducible in silicon; another suggests there may be non‑computable, non‑material aspects (Penrose–Lucas, qualia, intentionality, “soul”-like uniqueness).
  • Disagreements hinge on definitions: is intelligence just advanced pattern‑matching, or does it require true agency, self‑modeling, and non‑symbolic “aboutness” (intentionality)?
  • Some argue the search space for true AGI may be so large and full of dead ends that we never reach it in practice—even if it’s theoretically possible.

Current AI capabilities and agency

  • Many stress that today’s LLMs are stateless predictors: powerful “base models” that lack persistent memory, stable goals, or any ability to self‑update during use.
  • Others note that memory layers, RAG, in‑context learning, and RL already give systems limited continuity and goal‑directed behavior; the line between “no agency” and “weak agency” is blurring.
  • A subthread debates trivial agent loops (“while true: sense, think, act”; see the sketch below) as proto‑AGI; critics say this ignores catastrophic forgetting, fixed‑point pathologies, and the difficulty of robust continual learning.
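
For concreteness, here is a minimal sketch of the loop that subthread debates. Every name and function body below is a hypothetical placeholder, not anything a commenter actually built (a real system would call an LLM inside think and external tools inside sense/act); it is meant only to make the critics’ point visible: the “agent” is a stateless predictor plus an append‑only buffer.

    # Minimal "while true: sense, think, act" loop. The model itself is
    # stateless; the only continuity is an append-only memory list, so
    # nothing below learns, consolidates, or guards against
    # catastrophic forgetting.

    def sense() -> str:
        """Hypothetical sensor: read one observation from the environment."""
        return input("observation> ")

    def think(observation: str, memory: list[str]) -> str:
        """Hypothetical policy: stands in for a stateless LLM call.
        A fixed window over past observations plays the role of a
        naive memory layer / in-context learning."""
        context = " | ".join(memory[-5:])
        return f"act on {observation!r} given [{context}]"

    def act(action: str) -> None:
        """Hypothetical actuator: print instead of touching the world."""
        print(action)

    def agent_loop() -> None:
        memory: list[str] = []           # all persistent state lives here
        while True:                      # sense -> think -> act, forever
            obs = sense()
            action = think(obs, memory)
            act(action)
            memory.append(obs)           # append-only: no consolidation,
                                         # no weight updates, no learning

    if __name__ == "__main__":
        agent_loop()

Swapping the fixed window for retrieval (RAG) or letting feedback shape think is exactly where the “no agency” vs. “weak agency” line above starts to blur.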

Humans + AI as force multiplier

  • Broad agreement that near‑term danger is “humans wielding AI”: automating decisions in healthcare, finance, warfare, surveillance, hiring, and benefits, with opaque heuristics and little recourse.
  • Corporations and states are framed as existing “paperclip maximizers” or super‑organisms; AI is likened to new mitochondria that will supercharge their goal‑seeking, not a wholly new kind of monster.
  • Concerns include: large‑scale disinformation, deepfakes, automated cyberattacks, AI‑mediated governance, and economic enshittification where ordinary people deal only with AI front‑ends while real power remains unaccountable.

Human nature, power, and institutions

  • Debate over the article’s bleak line that humans mainly “gain power, enslave, kill, exploit”:
    • Some call this lazy misanthropy, noting most people live ordinary, non‑malicious lives.
    • Others counter that a small minority near power, amplified by technology, drives most large‑scale harm; hence the need for checks and balances, regulation, and distributed power.
  • A recurring idea: stupidity, self‑deception, and mass conformity may be more dangerous than outright evil.

Comparing AI risk to other existential threats

  • Several argue nuclear war and climate change are clearer, nearer risks than AGI, and have already come close to being realized (e.g., threats of tactical nuclear use in Ukraine, ongoing climate destabilization).
  • Others believe AI/AGI could plausibly be an extinction‑level threat this century, unlike nukes or climate, which are more likely to cause civilizational collapse than full extinction.
  • Many say we can and should worry about multiple risks simultaneously; focusing on AI shouldn’t mean ignoring war, environment, or mundane killers.

Critiques of the article’s framing

  • Multiple commenters see the “AI is just a tool; humans are the real monsters” line as analogous to “guns don’t kill people”: technically true but evasive of the technology’s specific risk profile.
  • The claim that AGI is “impossible” is widely criticized as unserious given that human brains exist within physics; skepticism about LLMs as a route to AGI is seen as reasonable, but impossibility is not.
  • Some feel the piece strawmans AI concerns by implying critics fear autonomous tools, when most who say “AI will kill us” already mean “humans using AI at scale.”