Let's be honest, Generative AI isn't going all that well
Quality of the Original Post
- Many commenters see the article as extremely low-effort: essentially four screenshots plus one line of text, with no substantial analysis or argument.
- Some speculate it might even be AI-generated; others note that merely aggregating negative headlines is not serious critique.
Is Generative AI “Going Well”?
- One camp argues it’s clearly transformative already:
- People report 3–10x speedups in coding, prototyping, migrations, refactors, diagrams, mockups, and documentation.
- Non-developers say they’re now building things they never could before.
- Some companies are integrating AI agents into IDEs and internal workflows, and are even laying off staff whose tasks are now handled by these tools.
- Another camp dismisses current tools as “slop”: unreliable, overhyped, and impressive mainly on toy or trivial codebases. They doubt claims of massive time savings and point to empirical studies in which developers actually lost time.
Code Quality, Assets vs Liabilities
- One subthread debates whether code is an “asset” or “liability”:
- Some argue each line of code is future maintenance and risk, so massive AI-generated rewrites are frightening.
- Others counter that code that solves problems and makes money is, economically, an asset, though it can carry risk and technical debt.
- AI-assisted rewrites of large legacy systems are praised by some as newly feasible and condemned by others as a future maintenance nightmare, especially if tests and review are weak.
Capabilities and Limitations
- Pro-AI commenters emphasize:
- Huge gains in scaffolding, boilerplate, refactors, test generation, and triage.
- Growing ability to work across large codebases and long documents, summarize, and reason about structure.
- Skeptics emphasize:
- Frequent hallucinations, loops, incorrect API usage, and brittle behavior even on “basic” tasks.
- Tools that can’t reliably copy examples or avoid making things up are seen as untrustworthy for high-stakes work.
- There’s recurring frustration with the discourse itself: criticism is often met with “you’re using it wrong” or “your expectations are too high.”
Jobs, Training Pipeline, and Society
- Some see AI as a “force multiplier” for skilled developers, not a replacement, at least for now; others fear it will shrink demand for juniors and destroy the talent pipeline.
- Concerns are raised that executives will take marketing claims at face value, slash staff, and then rediscover they need humans to clean up the mess.
- A few argue that the productivity of AI-generated code comes partly from effectively bypassing software copyright, benefiting large players and exposing how much redundant effort copyright has historically forced.
Hype, Progress, and Market Fit
- Several point to a Gartner-hype-cycle pattern: an early phase of seeming magic now giving way to a phase of realism.
- Some think progress has recently plateaued in quality and shifted to cost-cutting; others report clear improvements model-to-model in everyday work.
- A distinction is drawn between:
- The underlying technology (which many agree is impressive and improving), and
- Product/market fit for “copilot for everything,” which often disappoints at scale.
Gary Marcus and Predictions
- Multiple commenters consider the author a chronic AI pessimist with a history of bad predictions; some note that several of his 2029 “AI won’t be able to…” claims already look shaky or partially achieved.
- Nonetheless, some agree with his broader point: current generative models alone are unlikely to yield AGI and should not be the sole basis for economic or geopolitical strategy.
Net Assessment from the Thread
- Thread sentiment is sharply polarized:
- Heavy users in software and niche workflows overwhelmingly say “it’s going very well for us.”
- Skeptics focus on unreliability, overreach of deployment, and inflated promises from executives and boosters.
- Several commenters conclude that the real unknown is net impact: effects on productivity, quality, jobs, and social outcomes remain hard to measure, even for daily users.