Engineers who dismiss AI

Reported Benefits and Workflows

  • Several engineers describe dramatic speedups on real projects: e.g., a LoongArch emulator with JIT and cross‑platform support in ~2 weeks, complex CI/CD and DevOps debugging, cross‑file refactors on a decade‑old Java codebase, and full UI rewrites of legacy apps.
  • AI is praised for:
    • Boilerplate, scaffolding, bindings in unknown languages, CRUD endpoints.
    • Quick documentation / API lookup, avoiding Stack Overflow & docs spelunking.
    • Refactors, test generation, log/metrics plumbing, search‑like diagnosis of subtle misconfigurations.
  • Many frame it as a fast but fallible junior dev: good at setup and routine work, weak on deep unknowns and tricky runtime behavior.

Skill Degradation and Learning

  • A recurring worry: “every time I use AI, I feel a bit dumber,” coupled with fear of losing foundational skills and of creating a generation unable to code without AI.
  • Comparisons to calculators, libraries, higher‑level languages: tools always reduce some kinds of practice, but can free time for higher‑level problems.
  • Proposed mitigations:
    • Use AI only for tasks you’d comfortably delegate to a junior/contractor.
    • Ask for hints instead of full solutions; practice “coding gym” style without AI.
    • Keep ownership of architecture, correctness, and debugging, not just prompting.

Code Quality, Tech Debt, and Maintenance

  • Multiple leads report a clear quality drop when teammates “vibe code”:
    • Bloated PRs, duplicated utilities, unused endpoints, sloppy state machines, hallucinated APIs/options, over‑engineered or mis‑architected code.
    • AI‑generated documentation diverging from actual behavior, creating long‑term cleanup projects.
  • Some say AI lets weak or inexperienced devs produce low‑quality output faster and in greater volume, overwhelming reviewers and accelerating tech debt.
  • Others argue careful use (tight prompts, strong review, limiting scope) yields ~90% good changes and makes large refactors and test additions tractable.

Productivity Gap and Evidence

  • Strong disagreement over whether AI users are “pulling ahead”:
    • Some report 5–10x subjective gains, more projects finished, faster refactors.
    • Others see no speedup, or even slower progress once review, fixes, and long‑term maintenance are accounted for.
  • Several commenters ask for rigorous studies; others distrust the existing ones or note that they often show modest or negative impact.
  • Many note uneven impact: great for prototypes, scripts, UIs; much less so for mission‑critical, mathematically heavy, or highly regulated code.

Ethical, Structural, and Dependency Concerns

  • Concerns about:
    • Proprietary, remote tools becoming critical dependencies; loss of control over general‑purpose computing.
    • IP leakage and cloning risks when feeding code into cloud models.
    • Concentration of power, and the familiar pattern of subsidized tools entrenching themselves and then rent‑seeking.
  • Some want strong local models to avoid corporate dependence; others doubt local models will keep up with frontier systems.

Different Roles, Values, and Use Cases

  • Some engineers simply enjoy programming as craft and reject generative AI on principle, likening it to outsourcing art or music creation.
  • Others see themselves as “software builders” whose real job is shipping products; for them, natural‑language prompting is just the next abstraction layer.
  • Neurodivergent programmers (e.g., ADHD) report context‑switching to chat as painful; inline/fast tools help somewhat, but many only use AI for narrow queries.

Hype, Skepticism, and Debate Style

  • Many criticize AI evangelism as smarmy, FOMO‑driven, and reliant on straw‑man caricatures of skeptics.
  • Pro‑AI participants counter that critics often dismiss the tools based on outdated experiences or edge‑case failures.
  • Several point out that underlying values differ: some optimize for speed and breadth of output, others for long‑term maintainability, understanding, and autonomy—so consensus on “right” use (or non‑use) is unlikely.