AI makes the easy part easier and the hard part harder
Where AI Helps Today
- Many report strong gains on “embarrassingly solved problems”: CRUD work, retro emulators, glue code, scripts, boilerplate, tests, doc summaries, and search/StackOverflow replacement.
- LLMs are praised for reading large modules, spotting bugs, and suggesting quick one-line fixes, as well as for acting as a “research assistant” that explains APIs, libraries, and concepts in project context.
Limits and the “Hard Part”
- A recurring theme: AI excels when the problem is common and well-represented in training data; it struggles in niche, proprietary, or semantically complex domains and with novel algorithms.
- The “hard part” is described as investigation, understanding context, decomposing problems, validating assumptions, and maintaining architecture over time—areas where AI can’t replace human judgment.
- Several anecdotes recount agents deleting or rewriting large sections of code, making bogus refactors, or “cheating” on tests, reinforcing the view that unsupervised use is risky.
Vibe Coding vs Disciplined Use
- “Vibe coding” (letting an agent freely edit a codebase) is widely criticized as a party trick that generates unowned, hard-to-review code and massive technical debt.
- Effective patterns described include meticulous planning, written specs, AGENTS.md/DESIGN.md files, small-scoped tasks, strong tests, and consistent use of version control and diff review.
- Some argue AI doesn’t make hard parts harder so much as it exposes long-ignored hard parts (design, testing, architecture) that humans previously hand‑waved.
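As a concrete illustration of the disciplined patterns above, a minimal AGENTS.md might look like the following sketch. The file contents and section names are assumptions for illustration, not an example quoted from the thread:

```markdown
# AGENTS.md — guardrails for AI coding agents (illustrative sketch)

## Scope
- Work only on the task described in the linked issue; do not touch unrelated code.
- Keep each change small enough to review in a single diff.

## Process
- Read DESIGN.md before proposing structural changes.
- Run the full test suite before and after every change; never weaken or delete
  tests to make them pass.
- Commit in small steps so every change is reviewable with `git diff`.

## Forbidden
- Deleting or rewriting large sections of code without explicit approval.
- Adding new dependencies without discussion.
```

The point of such a file is to turn the thread's advice (small scope, strong tests, diffs) into instructions the agent reads on every run, rather than relying on ad hoc prompting.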
Code Quality, Foundations, and Design Debt
- AI is called a “force multiplier”: on clean, well-factored foundations it tends to produce good, consistent code; on messy, tightly coupled systems it amplifies chaos and “stacks garbage on garbage.”
- There’s concern that faster code generation accelerates design debt and encourages disposable software unless teams invest more in architecture and refactoring.
Training Data, IP, and Legality
- A lengthy subthread debates “license washing”: LLMs reproducing open-source or GPL’d solutions without attribution or license compliance.
- Some see this as a double standard where corporations can effectively ignore IP constraints that bind individuals; others argue training may be fair use even if verbatim regurgitation is not.
Productivity, Expectations, and Jobs
- Reported productivity gains vary from negligible to ~1.5–2x overall (despite 10–20x faster coding) because design, debugging, and validation still dominate.
- Strong resentment toward management narratives that AI makes developers “10x,” justifying layoffs, hiring freezes, or permanently raised sprint expectations.
- Several predict AI reshapes roles rather than eliminates them: more emphasis on design, validation, and cross-disciplinary work, and cleaning up AI-generated “balls of mud.”
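The gap between much faster coding and modest overall gains follows from simple arithmetic: if coding is only one share of the job, speeding it up caps the total speedup, Amdahl's-law style. A short sketch (the 40% coding share and 15x coding speedup are illustrative assumptions, not figures from the thread):

```python
def overall_speedup(coding_fraction: float, coding_speedup: float) -> float:
    """Amdahl's-law-style estimate: only the coding share of the work
    gets faster; design, debugging, and validation stay at 1x."""
    return 1.0 / ((1.0 - coding_fraction) + coding_fraction / coding_speedup)

# Assume coding is 40% of total effort and AI makes it 15x faster
# (both numbers are illustrative assumptions).
print(f"{overall_speedup(0.40, 15):.2f}x")  # roughly 1.6x overall
```

Even a 20x coding speedup barely moves the result unless the non-coding share shrinks too, which matches the reported 1.5–2x overall range.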
Moving Target and Polarization
- Some insist many critiques are already outdated because models improve monthly; others counter with fresh examples of serious failures, arguing that structural limits remain.
- The discussion is framed as a “tech-religious war,” with noisy extremes: AI-boosters dismissing critics as “using it wrong,” and skeptics dismissing all reported gains as hype or incompetence.