LLMs are a 400-year-long confidence trick

Scope of “AI safety” and regulatory capture

  • Some argue prominent “AI safety” groups focus on speculative extinction risks to scare regulators, paving the way for centralized control and an oligopoly of “trusted” big firms.
  • Others counter that small advocacy orgs have limited resources and do address current harms (deepfakes, wellbeing, etc.), and that structural fixes (lawsuits, regulation) are hard.
  • Debate over whether these orgs meaningfully tackle environmental pollution, CSAM, and harassment, or mostly promote “intelligence explosion”–style scenarios.

Intelligence vs. utility

  • Long back-and-forth over whether LLMs are “intelligent” or merely imitating intelligence.
  • Some point to complex practical tasks (debugging systems, building full stacks, using custom DSLs) as evidence of real intelligence.
  • Others say it’s pattern-matching over human output, that it will degrade without new human-generated data, and that it’s best understood as a powerful calculator or power tool, not a mind.

Hype, con, and marketing

  • Many agree LLMs are useful but argue they’re sold as miracle cures (curing cancer, ending poverty, etc.), which fits the definition of a con: promising more than you can deliver.
  • Others say a tool that provides real gains, even if overhyped, is not a “confidence trick” in the same sense as Theranos.
  • Several commenters stress the article is about marketing narratives and fear-based “P(doom)” messaging, not about raw technical capability.

Productivity and real-world impact

  • Individual engineers report dramatic productivity gains (claiming up to “10x”), especially on boilerplate, refactors, and busywork.
  • Contrasting evidence: some companies measure productivity decreases even as engineers self-report gains, citing harder code reviews, more complex code, and skill atrophy.
  • Concerns about hallucinations, the lack of online learning, and weak long-horizon/agentic performance suggest current LLMs may hit a wall; others see no clear “wall” yet.

Social, economic, and cultural costs

  • Worries about environmental impact, hardware scarcity, harassment via deepfakes/CSAM, and displacement of artists and junior engineers.
  • The gaming community is cited as particularly hostile to visible generative content, though some say the backlash comes from a loud minority and that AI is already widely used under the hood.
  • Debate over whether individuals should “vote with their wallets” or society should make collective political choices about acceptable uses and labor displacement.