The future of everything is lies, I guess: Work
UK Online Safety Act and Blog Blocking
- Several UK readers see only an “Unavailable Due to the UK Online Safety Act” page.
- Some argue a personal blog with comments is exempt per Ofcom’s checker; others say comments are still “user content” and thus risky.
- Ofcom’s tool is described as indicative, not legal advice; posters note the Act’s real scope will only be settled by the courts.
- Some see the block as over‑cautious but understandable; others as a political protest.
AI, Labor, and Class Dynamics
- Many expect ML/LLMs to shift power and money from labor to capital, accelerating existing inequality.
- Debate over “CEOs and billionaires bad”: some see necessary class critique; others warn it leads to learned helplessness and normalizing bad behavior.
- Unions and professional self‑regulation are proposed as defenses, contrasting software with more protected professions.
- Discussion of “working class vs owning class,” with software engineers framed variously as workers, “house slaves,” or minor nobility.
LLMs in Software Development: Witchcraft, Slop, and Productivity
- Strong split between:
  - Advocates reporting 2–10x productivity, easier refactors, more consistent code, and new solo‑founder possibilities.
  - Skeptics emphasizing hallucinations, subtle bugs, security hazards, and the impossibility of safely “spot‑checking” large AI outputs.
- “Witchcraft”/incantation metaphor resonates: prompting feels like spell‑casting, with fragile rituals and latent disasters.
- Disagreement over whether bad outcomes are tool flaws or workflow/permission‑design flaws.
- Concern that rapid AI‑driven change increases technical debt and shifts risk onto downstream maintainers and users.
Pace and Shape of AI Progress
- Ongoing argument: Are we near a plateau (logistic curve) or still at the bottom of compounding “stacked sigmoids”?
- Some see only modest headroom in current LLM architectures; others predict much more capability and pervasive agents.
- Singularity talk divides commenters: some use it strictly as “beyond-prediction point,” others reject the whole frame as cranky or misleading.
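The plateau-vs-stacked-sigmoids disagreement above comes down to a simple mathematical point: near its midpoint, a single logistic curve and a sum of staggered logistics are nearly indistinguishable, and they only diverge later. A minimal sketch (illustrative only, not from the thread; the midpoints and scale are arbitrary assumptions):

```python
import math

def logistic(t, midpoint, scale=1.0):
    """Standard logistic: ~0 well before the midpoint, ~1 well after."""
    return 1.0 / (1.0 + math.exp(-scale * (t - midpoint)))

def single(t):
    # One S-curve: growth that eventually plateaus near 1.
    return logistic(t, midpoint=0.0)

def stacked(t, midpoints=(0.0, 5.0, 10.0)):
    # "Stacked sigmoids": each new wave of capability contributes
    # another S-curve, so the total keeps climbing past any one plateau.
    return sum(logistic(t, m) for m in midpoints)

# Early on the two trajectories look alike; much later, the single
# curve has flattened near 1 while the stack is still rising toward 3.
print(single(1.0), stacked(1.0))
print(single(12.0), stacked(12.0))
```

Which model you believe you are on cannot be read off the local slope, which is why the thread's argument does not resolve.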
Automation, Safety, and Human Factors
- Frequent references to aviation, nuclear safety, and remote surgery as prior art on automation risks.
- Concepts like “automation/vigilance fatigue” and de‑skilling are seen as directly relevant to AI agents.
- Air France 447 and Tesla/FSD are debated:
  - One side: automation largely improves safety; anecdotes are overused.
  - Other side: rare failures in highly reliable systems are especially dangerous, and humans are poor monitors of such systems.
Deskilling and Cognitive Offloading
- Examples: surgeons losing hands‑on skill when relying on robots; drivers losing spatial navigation skills when relying on GPS.
- Historical analogy to worries about writing degrading memory, with pushback that LLMs differ because they do “the reading and understanding,” not just storage.
Economic Futures, UBI, and Open Models
- If AI replaces many white‑collar jobs, posters worry about who captures the surplus: big tech vs society (UBI).
- Open‑weights are seen by some as a partial counterweight to centralization, but others note hardware, energy, and materials could simply become the new chokepoints.
- Questions raised about how UBI would treat former high earners vs low earners; analogy to steelworkers who never found equivalent work.
Personal and Professional Coping
- Some find AI tools exhilarating but mentally destabilizing: solo devs feel pressured to “do everything” (product, infra, marketing) now that coding is faster.
- Suggestions include narrowing focus, talking more with clients, and “course‑correcting” to sustainable roles.
- Broader worry that AI will intensify alienation, shallow “easy” interactions, and social intolerance, even if it makes codebases cleaner.