My AI skeptic friends are all nuts
Perceived Productivity Gains and Agentic Coding
- Many commenters report large personal speedups: LLMs help them finish long‑delayed side projects, scaffold apps, write tests, and handle “boring” glue code.
- The big claimed step‑change isn’t plain chat‑based completion but IDE‑integrated agents that:
  - Read and edit multiple files
  - Run linters, tests, and commands
  - Iterate in a loop until things compile and tests pass (a minimal sketch of this loop follows the list below)
- Some describe workflows where they queue many agent tasks in the morning and later just review PRs, likening it to having a team of junior devs.
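
A minimal sketch of the edit/lint/test loop these agents run, assuming hypothetical `llm_propose_patch` and `apply_patch` helpers (real tools like Claude Code, Cursor, or Aider implement much richer versions of this):

```python
import subprocess

MAX_ITERATIONS = 10

def llm_propose_patch(task: str, feedback: str) -> str:
    """Hypothetical: send the task, repo context, and the previous
    lint/test output to a model; get back a unified diff."""
    raise NotImplementedError

def apply_patch(patch: str) -> None:
    """Hypothetical: apply the diff to the working tree (e.g. `git apply`)."""
    raise NotImplementedError

def run(cmd: list[str]) -> subprocess.CompletedProcess:
    """Run a command, capturing output so the model can read failures."""
    return subprocess.run(cmd, capture_output=True, text=True)

def agent_loop(task: str) -> bool:
    """Propose a patch, apply it, lint and test, feed failures back; repeat."""
    feedback = ""
    for _ in range(MAX_ITERATIONS):
        apply_patch(llm_propose_patch(task, feedback))
        lint = run(["ruff", "check", "."])
        tests = run(["pytest", "-q"])
        if lint.returncode == 0 and tests.returncode == 0:
            return True  # lints clean and tests pass: done
        feedback = "\n".join(
            [lint.stdout, lint.stderr, tests.stdout, tests.stderr]
        )
    return False  # iteration cap hit; a human takes over from here
```

The stop conditions (clean lint, green tests, an iteration cap) are what make the "queue tasks in the morning, review PRs later" workflow possible.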
Skepticism: Quality, Hallucinations, and Maintainability
- Others say agents frequently:
  - Misapply patches, break existing code, or invent APIs and packages
  - Generate sprawling, messy diffs and partial refactors
- They argue that:
  - Reading and validating AI‑generated boilerplate can take as long as writing it
  - Hallucinations remain a core failure mode, especially around project‑specific details or niche domains
- There’s concern that “vibe‑coded” slop will accumulate into massive, fragile codebases no one really understands.
How to Use LLMs Effectively (Tools, Prompts, Scope)
- Several point out that:
  - Results vary hugely by model, tool (Cursor, Claude Code, Zed, Copilot, Aider, etc.), and language (JS/TS/Go/Python often fare better than, say, Elixir).
  - Small, well‑scoped changes and testable units work best; “build a whole feature from scratch” tends to fail.
- Effective use is described as a skill:
  - Clear, detailed prompts; providing docs and relevant files
  - Letting agents run tools, but constraining commands and reviewing every PR (see the allowlist sketch after this list)
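
One way to make "constraining commands" concrete is to gate every shell command the agent requests through an allowlist before executing it. A minimal sketch; the specific allowed and blocked commands here are illustrative choices, not from the discussion:

```python
import shlex
import subprocess

# Illustrative policy: read-only and lint/test commands only; nothing that
# pushes, deletes history, touches the network, or escalates privileges.
ALLOWED = {"pytest", "ruff", "mypy", "ls", "cat", "grep", "git"}
BLOCKED_GIT_SUBCOMMANDS = {"push", "reset", "clean"}

def run_agent_command(command_line: str) -> subprocess.CompletedProcess:
    """Execute an agent-requested command only if it passes the policy."""
    argv = shlex.split(command_line)
    if not argv or argv[0] not in ALLOWED:
        raise PermissionError(f"command not allowed: {command_line!r}")
    if argv[0] == "git" and argv[1:2] and argv[1] in BLOCKED_GIT_SUBCOMMANDS:
        raise PermissionError(f"git subcommand not allowed: {argv[1]}")
    return subprocess.run(argv, capture_output=True, text=True, timeout=300)
```

Agent tools typically expose some analogous per-command approval or allowlist setting; the point is that the agent proposes, but the harness decides.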
Impact on Roles, Learning, and Craft
- Supporters: seniors should move “up the ladder” to supervising agents and focusing on harder design work; tedious tasks should be automated.
- Critics:
  - Fear the job devolves into endless code review for opaque machine output.
  - Worry juniors won’t get enough hands‑on practice to become future experts.
  - See a loss of “craft” and pride in clean, well‑shaped code.
Non‑coding Applications and “Magic” Use Cases
- Strong enthusiasm around speech recognition, transcription cleanup, translation, and language learning (e.g., Whisper + LLM cleanup, subtitles, flashcards; sketched after this list).
- Some say these uses already match or beat traditional tools; others note dedicated ASR/translation models still outperform general LLMs on raw accuracy.
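
A minimal sketch of the Whisper-plus-LLM-cleanup pipeline mentioned above, using the open-source `whisper` package (`pip install openai-whisper`); the cleanup call is a placeholder for whichever LLM you use:

```python
import whisper  # pip install openai-whisper

def transcribe(audio_path: str) -> str:
    """Raw ASR pass. Whisper returns text plus timestamped segments;
    larger checkpoints ("small", "medium", ...) trade speed for accuracy."""
    model = whisper.load_model("base")
    return model.transcribe(audio_path)["text"]

def clean_transcript(raw_text: str, llm_complete) -> str:
    """Cleanup pass: fix punctuation, casing, and obvious mis-hearings
    without paraphrasing. `llm_complete` is a placeholder for any
    prompt-in/text-out call (hosted API or local model)."""
    prompt = (
        "Clean up this ASR transcript. Fix punctuation, casing, and "
        "obvious transcription errors; do not change the meaning:\n\n"
        + raw_text
    )
    return llm_complete(prompt)
```

This two-pass shape matches the accuracy point above: keep the dedicated ASR model for raw recognition, and use the general LLM only for cleanup and formatting.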
Ethical, Legal, Privacy, and Hype Concerns
- Ongoing anxiety about:
  - Training on scraped code without honoring licenses; some commenters threaten to stop open‑sourcing their code.
  - Cloud‑hosted models seeing proprietary code; air‑gapped or local models are weaker or expensive.
- Debate over whether claims of “linear improvement” justify the massive investment and energy cost.
- Many see LLMs as clearly useful but overhyped; they resent being told skeptics are “nuts” rather than engaging with nuanced, domain‑specific concerns.