The future of everything is lies, I guess: Where do we go from here?
Use of AI at Work and Coercive Incentives
- Many feel economically forced to use LLMs despite misgivings; refusal may mean job loss or stalled careers.
- Some add “AI”/“agentic workflows” to their résumés even though they personally cringe at the terms, reasoning that HR and management now expect them.
- Others treat an emphasis on AI skills in job ads as a red flag and prefer to avoid AI‑centric workplaces, even if that means lower‑status or manual jobs.
- Several describe “AI theater”: leadership mandates AI use to “accelerate” feature delivery, leading to massive, unreviewable PRs and long‑term quality worries.
Ethics, Principles, and System Constraints
- Strong divide between “stick to principles even if it hurts you” vs “individual ethics can’t beat market incentives.”
- Some argue personal boycotts are futile without structural change; others insist individual refusal still matters morally.
- Frustration runs both ways, with each side accusing the other of naivety — naive systems thinking versus naive moralizing — and disagreement about how much individuals can influence large‑scale trajectories.
Deskilling, Metis, and Learning
- Many resonate with the article’s concern that LLMs erode persistence, “muscle memory,” and deep understanding.
- Comparisons to writing, calculators, and Socrates’ critique of writing: new tools always deskill something, but outcomes differ by domain.
- Particular worry about students who rely on AI for coursework, then “crash” on exams; professors report unusually high failure rates.
- Some see a future premium on “pre‑AI” engineers who learned through long, manual struggle and can now use AI more judiciously.
Car Analogy and Technology Externalities
- Long, detailed debate over whether cars were a net positive and how that maps to AI.
- One side: cars (and by analogy, AI) brought vast benefits in logistics, mobility, and prosperity; you must accept some externalities.
- Other side: car‑centric planning produced sprawl, pollution, isolation, and dangerous streets; benefits could have been achieved with fewer harms via different policy.
- The analogy is invoked both ways: to argue “we should shape AI with regulation now,” and to argue “it’s unrealistic to ban or opt out of a dominant technology.”
Concrete AI Use Patterns
- Common coding pattern: LLMs for boilerplate, scaffolding, refactors; humans for design, tricky logic, and cleanup.
- Some teams skip reviews for “vibe coders” and instead have stronger engineers refactor their AI‑assisted PRs directly.
- Others deliberately feed “slop” into AI‑driven review cultures, seeing it as giving management what they asked for.
Futures, Risk, and Regulation
- Views range from “LLMs are just another automation tool that will create new jobs” to “they are aligned with elite interests and could entrench feudal‑like power.”
- Suggested responses include unions, resisting AI mandates, aggressive regulation and liability, opposing datacenter subsidies, and even coordinated strikes or bank runs (the last controversial, with its effectiveness widely disputed).
- Disagreement on how much doom is warranted; some call the tone excessive doomerism, others see it as proportionate to the stakes.
Information Ecology and Legal Context
- Concern that pervasive AI slop will make “source or GTFO” essential, yet sources themselves may be polluted.
- UK readers note the blog is geo‑blocked due to Online Safety Act concerns; this is cited as illustrating broader information‑control risks independent of AI.