Re: My AI skeptic friends are all nuts

Scope of Skepticism vs. Hype

  • Many commenters say “skeptic” is the wrong label: most critics accept that LLMs are powerful and useful but dispute grand claims (imminent AGI, total job replacement, “learning to code is obsolete”).
  • Some argue skeptics often haven’t seriously tried current tools and are stuck on outdated impressions.
  • Others counter that skepticism simply means withholding belief until evidence is shown, and that the hype far outstrips demonstrated capability.

Education, Homework, and “Dead Classrooms”

  • Strong concern over schools requiring LLM use, and teachers using LLMs to grade, leading to “LLM writes, LLM grades” situations.
  • Critics worry this undermines development of reasoning, writing, and problem‑solving skills, and amplifies a digital divide (wealthy students get better tools).
  • Some argue essay-writing is mostly busywork; others insist it’s core to organizing thoughts and learning logic.
  • Several say take‑home assignments and homework are effectively “dead” as honest assessment tools in an LLM world; some welcome homework’s demise, while others argue independent practice is essential.

Skill Atrophy and Critical Thinking

  • Multiple anecdotes from experienced devs who feel unable to work without LLMs, or who have forgotten basic patterns they once knew.
  • One side: atrophy of unused skills is fine — if you truly no longer need them, nothing is lost.
  • Opposing side: coding and critical thinking are central job skills; if you can’t perform or verify them without a tool, you’re dangerously dependent, especially for future generations who never built the baseline.

Analogies to Past Technologies

  • Supporters compare the fears to earlier panics over calculators, Google, IDEs, and higher-level languages; abstraction and tool use are seen as the field’s normal trajectory.
  • Critics respond that LLMs uniquely offload cognition, not just manual or syntactic work, and may hollow out thinking rather than just low-level implementation.

Socio‑Political and Economic Concerns

  • Some focus less on code quality and more on systemic effects: accelerated concentration of power and wealth, AI‑driven bureaucracy, erosion of human oversight and recourse, risk to democracy and social fabric.

Data Quality and Self‑Training

  • Brief debate on “AI slop” poisoning training data: worries that models will degrade as they train on their own output.
  • Others argue that ranking, curation, and selection for popularity/quality can still sustain or even improve models, though this is acknowledged as nontrivial and imperfect.

LLMs in Everyday Software Work

  • Several note that a large fraction of programming is routine “blue‑collar” glue work where LLMs and codegen shine and risks are lower.
  • Others insist even routine code must be reasoned about by humans; they distrust any generated code that hasn’t been deeply understood.

AI Skepticism as Politics and Research Strategy

  • One view frames strong AI skepticism as a partly political stance; skeptics reply that concerns about AI’s downsides are broad and cross‑ideological.
  • An ML researcher argues the real issue isn’t whether LLMs work, but that almost all funding and attention are being funneled into a single paradigm (scaling transformers), crowding out alternative approaches and creating a fragile “all eggs in one basket” situation.