Where's the shovelware? Why AI coding claims don't add up
Layoffs, economics, and the AI story
- Several commenters argue recent tech layoffs are driven mainly by the end of cheap money, over‑hiring, and looming recession; “AI productivity” is seen as a convenient narrative to justify cuts and impress investors.
- Others note management believes in near‑term AGI or dramatic cost savings, so hiring more devs conflicts with a strategic goal of shrinking labor.
Productivity claims vs flat output metrics
- The article’s central point—that app stores, Steam releases, domain registrations, etc. show no post‑LLM explosion—resonates with many.
- Commenters challenge the 10x productivity marketing: if it were real, we’d see far more games, SaaS apps, and shovelware; instead, the trends are flat or only slightly up.
- Some counter that coding speed was never the main bottleneck: product‑market fit, requirements, integration, and polish dominate timelines, especially in companies.
Where the AI‑written code actually goes
- Many say their AI gains show up as:
  - One‑off scripts, glue code, personal tools, migration utilities.
  - Internal dashboards, dev‑only tools, refactor helpers.
- This work often isn’t public, so it won’t show up in app stores or GitHub metrics.
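The bullets above describe private, throwaway work. As a minimal sketch of the kind of one-off migration utility commenters mean (the filenames and data shape are hypothetical, not from any real codebase):

```python
# Hypothetical one-off migration utility: convert a legacy CSV export
# to JSON for a new internal tool, then throw the script away.
import csv
import json


def migrate(csv_path: str, json_path: str) -> int:
    """Read legacy rows from csv_path, write them as JSON, return row count."""
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))
    with open(json_path, "w") as f:
        json.dump(rows, f, indent=2)
    return len(rows)
```

Code like this is exactly what never appears in public release metrics: it runs once, does its job, and is deleted.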
What LLMs are good at
- Widely cited “sweet spots”:
  - Boilerplate, scaffolding, mocks, test skeletons.
  - Shell scripts, regexes, config, IaC snippets.
  - Explaining APIs/libraries and locating things in large codebases.
- Some report 3–5x speedups for narrow tasks or greenfield prototypes, especially with newer “agentic” tools.
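To make the “boilerplate and test skeletons” sweet spot concrete, here is an illustrative scaffold of the sort such tools emit in seconds; `apply_discount` stands in for a hypothetical function under test:

```python
# Illustrative test skeleton of the kind LLMs generate quickly.
# `apply_discount` is a stand-in for a hypothetical function under test.

def apply_discount(price: float, pct: float) -> float:
    """Return price reduced by pct percent (e.g. pct=10 means 10% off)."""
    return round(price * (1 - pct / 100), 2)


def test_no_discount():
    assert apply_discount(100.0, 0) == 100.0


def test_basic_discount():
    assert apply_discount(100.0, 10) == 90.0


def test_rounding():
    assert apply_discount(19.99, 15) == 16.99
```

Mechanical, repetitive code like this is where commenters report the largest speedups; the hard part, as the thread notes, is everything around it.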
Failures, hallucinations, and quality concerns
- Many concrete anecdotes of:
  - Out‑of‑date tutorials, wrong APIs, hallucinated libraries.
  - Over‑engineered or redundant code instead of using existing libs.
  - Subtle bugs that erase any time saved.
- Net effect for complex/brownfield work is often “a wash” or negative once verification and debugging are counted.
Team dynamics, juniors, and review debt
- Experienced devs worry juniors are “vibe coding” large features they don’t understand, creating unreadable, untested “slop”.
- Code review becomes harder: reviewers can’t assume the author understands the patch; AI‑generated chunks balloon PR size and technical debt.
Management hype and developer backlash
- Multiple stories of managers unilaterally cutting estimates (e.g., to 20% of original) “because we’re an AI‑first company”.
- Developers describe AI as useful but nowhere near the level that justifies layoffs, schedule compression, or salary deflation.
- There’s concern about skill atrophy, especially if core problem‑solving is offloaded, and about entry‑level and non‑technical workers being hit first.
Future trajectory
- Some expect continued, significant improvement (especially with agents), while others already see diminishing returns.
- Consensus in the thread: AI is a powerful but narrow tool today, far from the universal 10x coding accelerator being sold.