AI is making us work more
Economic impacts: productivity gains vs who captures them
- Many argue AI-fueled productivity won’t reduce work hours; it will raise expectations and output targets, with gains captured by employers and shareholders rather than workers.
- Several compare this to the Industrial Revolution and automation generally: more output, often more inequality, not less work. Others counter that over long periods productivity has raised broad prosperity (shorter work weeks, retirement, cheaper goods).
- Strong focus on capital vs labor: if you own the business or freelance on fixed-price contracts, you can “capture the efficiency”; if you’re an employee, efficiency mostly means “do more for the same pay” and higher layoff risk.
- Some worry AI plus robotics could render most labor redundant, eliminating social mobility and forcing major systemic changes (UBI, new economic models) or risking unrest.
Energy, resources, and “too cheap to meter”
- One subthread debates whether AI or tech more broadly could make energy, water, food, and housing extremely cheap.
- Optimists envision AI-accelerated R&D (fusion, robotic farming, automated permitting/building).
- Skeptics note historical rebound effects (the Jevons paradox; see the sketch after this list), AI’s current energy intensity, fossil-fuel depletion, and political constraints on housing. They doubt abundance will translate into low consumer prices given monopolistic dynamics.
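To make the rebound point concrete, here is a minimal arithmetic sketch of the Jevons-style argument, assuming constant-elasticity demand and illustrative numbers that are not from the thread:

```python
# Illustrative rebound-effect arithmetic (hypothetical numbers, constant-elasticity demand).
# If a technology halves the energy needed per unit of output, the effective price of
# that output falls; if demand is elastic enough, total energy use can still rise.

def total_energy(efficiency_gain: float, demand_elasticity: float,
                 baseline_energy: float = 100.0) -> float:
    """Energy used after an efficiency gain.

    efficiency_gain:   e.g. 2.0 means each unit of output now needs half the energy.
    demand_elasticity: % change in demand per % change in effective price.
    """
    # Effective price per unit of output falls in proportion to the efficiency gain.
    price_ratio = 1.0 / efficiency_gain
    # Constant-elasticity demand response: demand scales with price_ratio ** (-elasticity).
    demand_ratio = price_ratio ** (-demand_elasticity)
    # Total energy = (baseline energy / efficiency gain) * new demand level.
    return baseline_energy * demand_ratio / efficiency_gain

print(total_energy(2.0, 0.5))  # ~70.7: efficiency wins, total use falls
print(total_energy(2.0, 1.5))  # ~141.4: rebound dominates, total use rises
```

With elasticity below 1, the efficiency gain still cuts total use; above 1, cheaper output induces so much extra demand that total use rises, which is the Jevons-style outcome the skeptics point to.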
Workplace reality: more work, more oversight
- Commenters describe AI removing “friction” (regexes, boilerplate, small debugging) so they can ship much more, but this turns into more features, more meetings, and higher performance expectations, not more leisure.
- Several describe 996-style or near-996 cultures (roughly 9 a.m. to 9 p.m., six days a week) at AI startups: founders and early employees working extreme hours, with AI framed as a way to go even faster.
- Automation at work differs from home automation: a dishwasher gives personal free time; workplace automation just frees you to be assigned more tasks.
Developers: acceleration, slowdown, and code quality
- Some report huge personal gains: solo builders and ex-devs using LLMs to revive startups, build MVPs, and move from “grind-y coding” to architecture and product work.
- Others say LLMs create more work: non-deterministic output, hallucinated code, shallow “vibe-coded” PRs, and more QA and mentoring overhead. One commenter cites data showing developers using AI assistance actually took ~19% longer per task while believing they were faster.
- Debate over whether LLMs are “superhuman” at language and coding, or merely 20–90% right before going fatally wrong. Many trust LLMs only for constrained, verifiable tasks; critical code and algorithms stay hand-written.
Ethics, billing, and career strategies
- Contractors discuss whether to bill by time or by value: some openly “capture the efficiency” (billing the old three hours even when AI cut the task to 15 minutes), while others call that fraud unless the pricing is explicitly fixed-scope.
- Several advocate quietly automating your job for your own benefit (more free time, side projects, or second job) because visible productivity gains just reset expectations and don’t raise pay.
Automation, burnout, and culture
- Multiple stories: automation and process improvements leading to higher throughput, more QA, more bugs found, and more stress, with little reward; coworkers sometimes resist learning automation to avoid raising the bar.
- Many see the core problem as cultural and structural: a work-obsessed, shareholder-first system where any efficiency is converted into more work, not better lives, and where AI becomes just a “bigger shovel.”