The Future of Everything Is Lies, I Guess: Part 3 – Culture
Access, Geoblocking, and the UK Online Safety Act
- Several commenters in the UK report the article as blocked with a message referencing the Online Safety Act.
- Others note it works in EU countries, suggesting this is not an EU‑wide issue.
- A linked prior post shows the site owner deliberately geoblocking the UK using IP‑geolocation databases, framing it both as a protest against UK regulation and as a pragmatic move to avoid compliance risk, despite collateral blocking of non‑UK users.
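The mechanism the comments describe is simple enough to sketch. A minimal, hypothetical version of IP‑based geoblocking, using only the standard library (the ranges below are illustrative test networks, not real UK allocations; in practice a commercial geolocation database such as MaxMind's supplies the country mappings, and its inevitable inaccuracies are exactly where the collateral blocking of non‑UK users comes from):

```python
import ipaddress

# Hypothetical ranges standing in for "IPs a geolocation database maps to
# the UK". Real deployments load thousands of such ranges from a vendor
# database; mis-mapped ranges are what block non-UK visitors by mistake.
UK_RANGES = [
    ipaddress.ip_network("81.2.69.0/24"),     # MaxMind's documented UK test range
    ipaddress.ip_network("203.0.113.0/24"),   # TEST-NET-3, used here as filler
]

def is_blocked(client_ip: str) -> bool:
    """Return True if the client's IP falls in a range mapped to the UK."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in UK_RANGES)
```

A server would call `is_blocked()` on each request's source address and return the Online Safety Act notice instead of the article when it matches.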
Learned Helplessness and “What Can We Do?”
- Multiple comments lament cultural resignation to propaganda, advertising, and AI‑driven manipulation.
- Some argue critics and pundits profit from analysis, not solutions, and avoid advocating disruptive tactics like general strikes or mass civil disobedience.
- Others ask concretely how to “fight back,” suggesting local organizing, or even “forking the internet” into alternative, less ad‑driven networks.
- Counterpoints note entrenched infrastructure and business models make such forks difficult, and some doubt any purely technological fix exists.
Culture, Media, and Existing Critical Traditions
- Commenters point out that manipulation, “slop” media, and surveillance long predate current AI; critical theory and sociology had already analyzed these dynamics.
- Some criticize tech culture for ignoring humanities traditions and reinventing old insights through blogs and sci‑fi instead.
AI in Fiction, Myth, and Magic
- Several connect the article’s themes to TV shows, novels, and short stories about omniscient or assimilating systems.
- Others see LLMs as akin to spirits, sprites, or occult technology: powerful but alien in their conception of truth, requiring supervision.
- There is debate over whether calling programming or technology “magick” is a useful metaphor or just obscurantist branding.
Mandated AI Tools and the “Agentic Era” of Software
- A major subthread discusses workplaces mandating AI coding tools (e.g., Claude Code).
- Startup leaders frame this as survival: more features, more code, 3–10× output through “agentic” workflows and unified prompting.
- Engineers express unease: fear of massive technical debt, loss of craftsmanship, and being replaced once their work is routinized through LLMs.
- Some see value in mandates for teams that previously resisted tests and automation, but worry developers will ship better‑looking code without understanding why it works.
Automation, Craft, and Democratization Skepticism
- Several argue claims about “democratizing” software/knowledge mask a drive to extract workers’ “thought tokens” and replace skilled labor.
- Concerns include erosion of craft and mastery, cultural overvaluation of efficiency and growth, and uncertainty over where human judgment and taste will live when “building” becomes commoditized.
Philosophical Framing and Consciousness
- Commenters link LLM behavior to thought experiments like the Chinese Room and philosophical zombies, arguing this shows the limits of Turing‑style tests.
- Others note that even simple stochastic text generators from decades ago already hinted at these issues, and are surprised this analogy isn’t more widely internalized.
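The "decades‑old stochastic text generators" the commenters invoke are essentially Markov‑chain babblers (Mark V. Shaney being the classic Usenet example). A minimal word‑bigram sketch shows the point: plausible‑looking text with no model of truth behind it. This is an illustrative reconstruction, not code from the article or thread:

```python
import random
from collections import defaultdict

def build_chain(text: str) -> dict:
    """Map each word to the list of words observed to follow it (bigrams)."""
    chain = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def babble(chain: dict, start: str, n: int, rng: random.Random) -> str:
    """Emit up to n words by repeatedly sampling a follower of the last word.

    The output is locally fluent but globally meaningless -- the generator
    has statistics about word order, not a conception of truth.
    """
    out = [start]
    for _ in range(n - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)
```

The Chinese Room analogy in the thread lands because scaling this idea up (more context, more parameters) changes fluency, not the underlying relationship to meaning.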
Miscellaneous Reactions
- Some readers reject AI erotic/companionship use cases as deeply off‑putting.
- One playful thread imagines simulating many copies of oneself with superpowers, tying into the article’s themes of synthetic realities.