Why do we tell ourselves scary stories about AI?

Fear as Marketing and Power Play

  • Many commenters see AI companies as deliberately hyping existential risk and job-loss fears as a PR tactic.
  • Fear sells to CEOs and governments: “this is so powerful it can replace workers / decide wars, so you must buy from us and let us set the rules.”
  • Several comments frame this as regulatory capture: deregulate big US firms in the name of “beating China,” while tightly regulating competitors and open source.
  • Others argue leaders are genuine believers in the tech’s risks and are being relatively honest, even if the messaging is clumsy.

Actual vs Imagined Dangers

  • A recurring view: current systems are already dangerous in concrete ways—prompt injection, security failures, deepfakes, data leaks, brittle “agents” that fail catastrophically.
  • Some think talk of monomaniacal, world-ending AGI is overblown or premature; the more plausible near-term risk is AI as a tool to concentrate wealth and power.
  • Others insist long-term existential risks should still be taken seriously even if decades away.

Consciousness, Agency, and “Scary Stories”

  • Debate over whether today’s models are or could be conscious:
    • Some say clearly not—we fully understand they’re just statistical language models.
    • Others note theories of consciousness disagree, so certainty that “they’re not conscious” is unjustified.
  • Several emphasize that consciousness isn’t the core issue; non-conscious systems can still lie, deceive, plan, and cause harm.
  • One thread connects AI fear to longstanding human anxieties about artificial beings, sociopathy, and entities immune to social pressure.

Economic and Labor Concerns

  • Multiple comments report real productivity gains, with models now handling tasks ranging from small bug fixes to full feature development.
  • Others note early signs of labor displacement, especially junior and repetitive roles (e.g., some IT, translation, art).
  • Disagreement over scale: some see the vast majority of jobs as repetitive and at risk; others argue that most human work involves irreducible creativity, so most people will keep their jobs or even benefit.

Historical and Social Framing

  • Comparisons to past panics over “electronic brains,” trains, electricity, games, and the internet.
  • Social media is seen as amplifying doom narratives because fear and outrage drive engagement.
  • Several argue our real fear is not “AI itself” but the existing economic system and elites that will wield it.