Token growth indicates future AI spend per dev
Skepticism about the $100k/dev/year figure
- Many see $100k as arbitrary “sticker shock math” with no real justification; likely chosen to echo a mid/high developer salary.
- Back-of-envelope numbers (3–5 parallel tasks, a few hundred dollars/month each) land closer to ~$20–25k/year in tokens, unless context sizes and task complexity grow a lot.
- Several argue that if AI assistance adds maybe 10–20% productivity, it’s hard to justify token spend approaching (let alone exceeding) another full-time developer’s salary.
- Comparisons to expensive chip-design tools note that those costs are per seat, shared, and still typically far below $250k per engineer.
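The back-of-envelope figure above is easy to reproduce. The task counts and per-task monthly costs below are the commenters' illustrative assumptions, not measured data:

```python
# Reproduce the thread's back-of-envelope estimate: a few parallel agent
# tasks, each costing a few hundred dollars a month in tokens.
def annual_token_spend(parallel_tasks: int, usd_per_task_per_month: float) -> float:
    """Yearly token cost per developer under the thread's assumptions."""
    return parallel_tasks * usd_per_task_per_month * 12

print(annual_token_spend(3, 300))  # 10800 -- low end
print(annual_token_spend(5, 400))  # 24000 -- near the ~$20-25k/yr figure
```

Even the high end lands at roughly a quarter of the $100k/dev/year figure, which is the gap the skeptics are pointing at; closing it requires much larger contexts, heavier tasks, or more agents.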
Impact on developers, productivity, and demand
- Disagreement over whether AI yields 20x productivity or only modest gains that are partly offset by extra review and debugging.
- Some claim AI will reduce demand for many standalone applications (LLMs as interfaces), cutting certain dev roles even as productivity rises.
- Others expect Jevons-like effects: cheaper “automation” → more software built, more internal tools, less SaaS, and reduced reliance on external vendors.
- Debate over whether “10x devs” become “100x with AI” vs evidence that AI-assisted dev can be slower due to verification overhead.
Open source vs proprietary; local vs cloud
- One camp expects open-source models to be “good enough” locally within a few years, making ~$10k workstations competitive with cloud inference for heavy users.
- Critics say local models are still significantly worse and slower; frontier proprietary models will stay ahead, with no clear point where “good enough” freezes.
- Discussion of VRAM cost and hardware limits: some predict cheap 100GB+ accelerators within 10 years; others note memory prices have been flat.
- Enterprises split: some already self-host models for IP/security/safety-control reasons; others have gone cloud-only and are unlikely to rebuild data centers.
Economics of AI tools and pricing models
- Many tools use a “gym membership” model: flat subscription, heavy users subsidized by light ones. Some may be effectively selling $200 plans with $400 of tokens, betting on falling unit costs.
- Commenters liken this to Uber-style subsidy: not sustainable, especially when training is also expensive.
- Cloud analogy: unit prices may fall, but usage grows faster; without close monitoring, AI costs will still climb.
- Concerns that vendors seek lock-in; businesses are advised to maintain an open-weights fallback to avoid future “enshittification” or abrupt price hikes.
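Two toy models make the pricing dynamics above concrete. All rates and the user mix are hypothetical illustrations, not vendor data:

```python
# Model 1: the "gym membership" plan -- flat price, heavy users
# subsidized by light ones.
def plan_margin(price: float, token_costs: list[float]) -> float:
    """Average margin per subscriber: price paid minus tokens consumed."""
    return price - sum(token_costs) / len(token_costs)

# Model 2: the cloud analogy -- unit prices fall, but usage compounds
# faster, so total spend still climbs.
def yearly_spend(base: float, price_decline: float, usage_growth: float,
                 years: int) -> list[float]:
    """Total spend per year for each year in range(years)."""
    return [base * ((1 - price_decline) * (1 + usage_growth)) ** t
            for t in range(years)]

# A $200 plan stays profitable only while light users outnumber the
# heavy ones burning $400+ of tokens (hypothetical mix):
print(plan_margin(200, [20, 40, 60, 400, 450]))  # 6.0

# Tokens get 30%/yr cheaper, but usage grows 60%/yr: net spend still
# rises ~12%/yr (0.7 * 1.6 = 1.12).
print(yearly_spend(10_000, 0.30, 0.60, 4))
```

The second model is the core of the "without close monitoring, AI costs will still climb" argument: the decline in unit price and the growth in usage multiply, and whichever rate is larger wins.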
Parallel agents and practical limits
- Individual devs report cognitive limits at ~3–5 concurrent agent tasks if outputs are properly reviewed.
- Some see token growth driven by more parallel agents and longer “reasoning loops,” but question how much human oversight will realistically scale.
Broader and social angles
- Worry that “AI spend” narratives will justify suppressing developer salaries while offloading drudge work to AI.
- Doubts that 20x acceleration will benefit society broadly given existing inequality; suggestions of taxation or public/NGO programs to fund on-prem rigs for disadvantaged devs.