I used AI. It worked. I hated it
AI Coding Workflow & Permissions
- Many dislike step-by-step approval flows; reviewing and pressing “accept” for each small change feels excruciating and unproductive.
- Others advocate “YOLO mode” (auto-accept edits) in a sandboxed branch/VM, then reviewing work in larger chunks, like a PR.
- Some argue current permission prompts are mostly “security theater”; better to isolate the environment technically (sandbox, VM, container) and let the agent run freely inside.
- Even with wide permissions, people recommend running only a few agents at a time and keeping manual control over commits.
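The consensus workflow in these bullets — auto-accept freely inside a sandbox, but keep commits behind a human prompt — can be sketched as a small permission policy. This is a minimal illustration, assuming a hypothetical agent harness that consults a `should_auto_accept` predicate before each action; all names here are illustrative, not any real tool's API:

```python
from dataclasses import dataclass


@dataclass
class Action:
    """One step a coding agent wants to take, e.g. edit a file or commit."""
    kind: str       # "edit", "run_tests", "commit", "push"
    path: str = ""  # file affected, if any


def should_auto_accept(action: Action, sandboxed: bool) -> bool:
    """Hypothetical 'YOLO mode' policy: inside a sandboxed branch/VM,
    auto-accept routine edits and test runs; never auto-accept actions
    that alter history, so the human reviews work in larger chunks."""
    if action.kind in ("commit", "push"):
        return False    # human retains manual control over commits
    return sandboxed    # everything else is free only inside the sandbox
```

The design mirrors the thread's compromise: isolation replaces per-edit prompts, while the commit boundary preserves a PR-style review point.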
Quality, Verification, and Testing
- Common complaints: hidden bugs, partial implementations, weird architectural choices, and fragile shortcuts (e.g., modifying tests instead of fixing code).
- Several note that if you don’t carefully review, “gems” of bad decisions can poison the codebase and future LLM-assisted sessions.
- Verification and evaluation are seen as new major bottlenecks; the “fun” of coding shifts into tedious review.
- Others counter that this can be mitigated via automated tests, linters, TDD, fuzzing, and even LLM-driven exploratory testing and security review.
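One way to mechanize part of this review burden is a gate that trusts cheap automated signals (tests, linters) but escalates suspicious changes — such as an agent touching the test files themselves — to a human. A minimal sketch under those assumptions; the function and its file-naming heuristics are hypothetical, not any particular CI system's API:

```python
def review_gate(changed_files: list[str],
                tests_passed: bool,
                lint_clean: bool) -> str:
    """Hypothetical triage for an LLM-generated change.

    Returns "reject", "human-review", or "auto-merge". The key check:
    if the change modifies tests, it may be weakening them instead of
    fixing the code, so it always gets a human look.
    """
    touched_tests = [p for p in changed_files
                     if p.startswith("tests/") or p.endswith("_test.py")]
    if not tests_passed or not lint_clean:
        return "reject"          # cheap automated signals filter first
    if touched_tests:
        return "human-review"    # guard against the test-editing shortcut
    return "auto-merge"
```

This doesn't eliminate the verification bottleneck the commenters describe, but it concentrates human attention on the changes most likely to hide a bad decision.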
Productivity Gains vs Time Costs
- Some say LLMs now clearly outpace manual coding for routine tasks (CRUD views, small scripts) where the human already understands the solution.
- Others insist that for them, back-and-forth corrections make LLM use as slow or slower than writing code directly, especially for nontrivial logic.
- There’s disagreement over whether poor results mean the user is “using it wrong,” reflect inherent model limits, or stem from niche domains with weak training coverage.
Attitudes, Emotions, and Professional Identity
- One recurring frame is “five stages of grief” over AI: denial, anger, bargaining, depression, acceptance. Commenters place themselves at various stages.
- Some adopt an “adapt or die / shape up or ship out” stance, predicting that those who resist will be replaced by enthusiastic users.
- Others reject this as hype-driven fatalism, stressing integrity, craftsmanship, and concern for social risks, even talking about “loom smashing” and sabotage.
- There’s anxiety about loss of intellectually stimulating work, shallower future programmers, unstable/bad UX, and potential mass unemployment.
Capabilities, ‘Understanding’, and Future Trajectory
- One side emphasizes that LLMs are just next-token predictors that don’t truly “understand,” implying they may always need close human supervision.
- The opposing view points to emergent reasoning-like behavior, real-world impact, and rapidly improving performance as evidence they do, in practice, “understand enough.”
- Some see LLMs as “idiot savants”: great at generating boilerplate and variations, weaker at deep architecture and long-horizon decision-making.
- There is no consensus on whether tools will inevitably surpass humans end-to-end, or remain powerful but unreliable assistants.
Domain and Generational Perspectives
- In medicine and data-heavy domains, practitioners reportedly “love” AI for documentation and analysis, sometimes treating it as a reasoning peer (with human checks).
- Junior developers are described using agents aggressively, piping errors back into the loop, auto-testing, and shipping more ambitious projects than older devs did at that stage.
- Critics worry these juniors may become overly dependent on proprietary services and produce unmaintainable “slop,” while others argue experienced devs can adopt the same workflows if beneficial.
- Some note niches (e.g., safety-critical systems, kernels) where LLM-generated code is currently uncommon and higher assurance is required.
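The “piping errors back into the loop” workflow attributed to junior developers above can be sketched as a bounded retry loop: run the checks, and if they fail, hand the error output back to the model for another attempt. `run_checks` and `ask_model` are stand-ins for a real test runner and a real model call — both hypothetical here:

```python
from typing import Callable, Tuple

CheckFn = Callable[[str], Tuple[bool, str]]   # source -> (ok, error output)
ModelFn = Callable[[str, str], str]           # (source, errors) -> revised source


def fix_until_green(source: str,
                    run_checks: CheckFn,
                    ask_model: ModelFn,
                    max_rounds: int = 5) -> Tuple[str, bool]:
    """Feed compiler/test errors back to the model until checks pass.

    The bound on rounds matters: without it, a model that keeps making
    the same mistake loops forever, which is one of the failure modes
    critics of this workflow point to.
    """
    for _ in range(max_rounds):
        ok, errors = run_checks(source)
        if ok:
            return source, True
        source = ask_model(source, errors)  # model proposes a revision
    return source, False                    # give up; needs a human
```

Note that this loop only guarantees the checks pass, not that the result is maintainable — which is exactly the “slop” concern raised by critics.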