AI employees don't pay taxes
UBI and social safety nets
- Several commenters argue there is no realistic funding model for large-scale UBI from AI profits; small “petrostate-style” stipends don’t scale.
- Others counter that pilots and data show UBI works at small scale; the unsolved part is financing it nationally, not its individual effects.
- Some see UBI as politically doomed (resentment at “giving rich people money” and bureaucratic complexity); others say means-testing is costlier, crueller, and often used to sabotage welfare.
Tax base in an AI-heavy economy
- Core concern: payroll and income taxes shrink if humans are replaced by AI “employees,” eroding the revenue that currently funds governments and social insurance.
- Some say the solution is trivial: tax where the value flows now, namely corporate income, data centers’ energy use, and revenue from AI services.
- Others doubt governments’ capacity to adapt quickly or fairly, warning of convoluted systems like the existing US tax code.
Alternative tax designs
- Proposals include:
  - Progressive “earnings per employee” taxes (criticized as anti‑innovation and wage‑suppressing; a toy sketch appears below).
  - Land value tax and severance taxes on natural resources, described as “AI‑proof.”
  - Consumption/sales taxes, with debate over regressivity versus practicality.
  - Tiny taxes on all financial transactions or HFT‑style short-term gains, shifting the burden from labor to capital.
- Disagreement over whether focusing tax collection on top earners and corporations is numerically feasible or economically destabilizing.
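For concreteness, here is a minimal sketch of how a progressive “earnings per employee” tax might be computed. The bracket thresholds, rates, and the `earnings_per_employee_tax` function are invented for illustration; neither the article nor the commenters specified actual numbers.

```python
# Toy illustration of a progressive "earnings per employee" tax.
# All brackets and rates below are hypothetical, chosen only to show the shape
# of the proposal: the more profit a firm earns per human employee, the higher
# the rate applied to its total profit.

def earnings_per_employee_tax(profit: float, employees: int) -> float:
    """Return a hypothetical tax bill that rises with profit per employee."""
    if employees <= 0:
        # A fully automated firm: treat as the top bracket on all profit.
        return 0.45 * profit
    ratio = profit / employees  # profit generated per human employee
    # Hypothetical progressive schedule keyed to profit-per-employee.
    if ratio < 100_000:
        rate = 0.15
    elif ratio < 1_000_000:
        rate = 0.30
    else:
        rate = 0.45
    return rate * profit

# A labor-heavy firm and a heavily automated firm with the same profit:
print(earnings_per_employee_tax(50_000_000, employees=1_000))  # 15% bracket -> 7,500,000.0
print(earnings_per_employee_tax(50_000_000, employees=10))     # 45% bracket -> 22,500,000.0
```

Under these assumed brackets the same profit is taxed three times as heavily once headcount drops, which is roughly the mechanism critics in the thread call anti‑innovation.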
Capitalism, power, and “techno‑feudalism”
- One line of discussion claims we’re drifting from productive capitalism to “techno‑feudalism,” where a few owners rent AI and infrastructure to everyone else.
- Others push back, saying most firms still add value atop complex supplier networks; the real problem is monopoly and lax antitrust, not capitalism per se.
- Some foresee eventual communism or mass nationalization/taxation of AI firms as the only way to avoid collapse in demand and tax revenue.
Jobs, displacement, and productivity
- Sharp split:
  - One side says “AI will take all our jobs” is overblown; like tractors and past automation, AI will reallocate labor and create new, higher‑value work.
  - Others report concrete layoffs tied to AI tools and fear a downward spiral: fewer jobs → less consumption → business failures → fiscal crisis.
- Historical analogies (tractors, cars, past sectoral shifts) are used both to calm fears and to note that past productivity gains didn’t deliver the leisure Keynes predicted; instead, gains went largely to owners.
AI as “employee” vs tool
- Critics argue “AI employee” is a misleading metaphor; AI is capital equipment, not a taxpayer or person, and the key issue is tax structure, not anthropomorphizing.
- Some see AI mainly as a force multiplier: better tools mean more software, more automation work, and higher ambition, not less human employment overall.
Governance, inequality, and corporate power
- Commenters worry more about political capture and weak enforcement than about AI itself: corporations already avoid taxes, buy competitors, and shape laws.
- There is frustration that corporate directors rarely face personal consequences for aggressive tax schemes or fraud.
- Some note that without strong regulation and redistribution, an AI‑driven economy could concentrate wealth while leaving masses unemployed or purposeless.
Critiques of the article and discourse
- Multiple readers see the article as internally inconsistent (e.g., citing poor AI output while asking “what are humans for?”).
- Several suspect or detect LLM‑generated writing and are dismayed that even opinion pieces about AI are machine‑mediated.