I think I'm done thinking about GenAI for now

Divergent Personal Experiences

  • Some commenters report “exhilarating” gains: faster research, legal navigation, boilerplate code, test scaffolding, documentation, and legacy-code navigation (a sketch of the scaffolding point follows this list).
  • Others find net-negative value: hallucinated facts, subtle bugs, constant rework, and time lost learning “the right way to hold it.”
  • Several note that individual anecdotes are highly context- and personality-dependent, making global judgments about usefulness hard.
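To make the “boilerplate code, test scaffolding” point concrete, here is a minimal sketch of the kind of repetitive test code commenters report delegating. Everything in it is hypothetical: slugify is a toy stand-in for real project code, and the cases are invented for illustration.

    # Hypothetical pytest scaffolding of the sort commenters delegate:
    # parametrized cases for a small pure function.
    import pytest

    def slugify(title: str) -> str:
        """Toy stand-in for real project code under test."""
        return "-".join(title.lower().split())

    @pytest.mark.parametrize(
        ("title", "expected"),
        [
            ("Hello World", "hello-world"),
            ("  leading and trailing  ", "leading-and-trailing"),
            ("already-slugged", "already-slugged"),
            ("", ""),
        ],
    )
    def test_slugify(title: str, expected: str) -> None:
        assert slugify(title) == expected

The claimed gain is speed on exactly this enumerate-the-cases work, not on anything conceptually hard.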

Agentic Coding & Productivity Claims

  • Proponents of agent-based workflows describe a senior-dev-like role: breaking work into small steps, letting agents produce code/PRs, then reviewing with strong tooling and tests (a minimal sketch of this loop follows the list).
  • They claim 2–4x productivity in well-architected, well-documented, monolithic-ish codebases, especially for CRUD, transformations, tests, and refactors.
  • Skeptics say this just condenses the worst of code review without the benefit of mentoring a human who learns. Many end up “fighting the model” and finishing tasks faster by hand.
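For readers who have not seen these setups, here is a minimal sketch of the loop the proponents describe: small steps, an agent drafting each change, tests as the gate, a human as reviewer. The ask_agent function is hypothetical (a stand-in for whatever model or tool a team actually uses), and a git repo with a pytest suite is assumed.

    # Minimal sketch of the described agent workflow; not any specific tool.
    import subprocess

    def ask_agent(step: str) -> str:
        """Hypothetical agent call; should return a unified diff for `step`."""
        raise NotImplementedError("plug in a real agent/model client here")

    def run_ok(*cmd: str) -> bool:
        """Run a command; True if it exits 0."""
        return subprocess.run(cmd).returncode == 0

    def drive(steps: list[str]) -> None:
        for step in steps:
            with open("step.patch", "w") as f:
                f.write(ask_agent(step))              # agent drafts a small change
            if not run_ok("git", "apply", "step.patch"):
                continue                              # malformed patch: reject outright
            if not run_ok("pytest", "-q"):            # strong tooling/tests as the gate
                run_ok("git", "checkout", "--", ".")  # revert; reprompt or do it by hand
                continue
            print(f"ready for human review: {step}")  # the senior-dev review role

On this telling, the 2–4x claim comes from the human spending time only at the decomposition and review ends of the loop; the skeptics’ reply is that reviewing machine output is exactly the expensive part.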

Quality, Maintenance, and “Perpetual Junior” Concerns

  • Common metaphor: LLMs as interns/juniors who never learn, are overconfident, and make bizarre, hard-to-predict errors.
  • People report catastrophic failures in C and C++, better results in Python/JS and API-heavy boilerplate.
  • Worries: code bloat, fragile systems, future refactor hell, and no obvious way for models to make forward-looking architectural choices.

Mandates, Workplace Dynamics, and “Vibe Coding”

  • Executive mandates to use AI are described as demoralizing and burnout-inducing, particularly when the tools are immature.
  • Some organizations see bottom-up adoption; others spin up “vibe coding” teams cranking out risky, poorly understood features.
  • There’s strong concern about juniors skipping learning, cheating through exercises, and long-term skill atrophy.

Ethical, Social, and Environmental Harms

  • Critics emphasize climate impact, education degradation, trust erosion, and models trained on what they call the “mass theft” of data.
  • Some argue continued enthusiastic use despite acknowledged harms reflects privileged users who won’t bear the worst consequences.
  • Others dismiss “ethical AI” as incoherent in an arms-race dynamic, comparing the situation to an AI-powered prisoner’s dilemma (e.g., in legal disputes, the side that abstains simply cedes an advantage).

Non-Coding Uses

  • Several highlight high value in non-code domains: gardening, plant diagnosis, household repairs (via image input), system design brainstorming, and use as a “better Google.”
  • Counterpoint: high-quality books and expert-written resources often remain more reliable than model output polluted by low-quality web content.

Epistemic Uncertainty & Theory

  • One line of discussion: LLMs are fundamentally memorizers with messy, entangled representations, and continual retraining makes it hard to form stable “theories” of their behavior.
  • This anti-inductive character, plus moving goalposts in tooling and models, contributes to fatigue and reluctance to keep evaluating the tech.