The future of everything is lies, I guess – Part 5: Annoyances

Access to the blog / UK blocking

  • Some UK readers can’t access the site; others note the domain is reachable.
  • A linked post by the author explains the deliberate IP geoblocking of the UK, framed as a response to the Online Safety Act.
  • Debate over whether this is meaningful “protest” or just a necessary way to avoid compliance/liability.

LLMs, manipulation, and capitalism

  • Many see LLMs as another tool to widen class divides, optimize extraction, and intensify existing manipulative practices (dynamic pricing, claims denial, engagement farming).
  • Others argue manipulation has always been central to markets and media; AI just scales it.
  • A minority highlight positive potential: cheaper software, research acceleration, and local models empowering individuals.

Customer service, “enshittification,” and AI

  • Strong concern that AI support will become a wall between users and real remedies, further reducing accountability.
  • Several note this trend predates LLMs: phone trees, offshore scripts, and “computer says no” cultures already aim to reduce ticket volume, not help.
  • Some predict good human support becomes a premium differentiator, but others say monopoly/oligopoly power undermines that.

Optimism vs doomerism about AI

  • One camp: society has adapted to cars, phones, GM crops; LLMs are “just tools” and not civilization-ending.
  • Opposing camp: collapse/decline is plausible; AI could accelerate existing negative trajectories, and benefits so far feel incremental.

Trust, information quality, and communities

  • Nostalgia for the “old internet” as a relatively high‑trust whalefall now being eaten by spam, ragebait, and AI slop.
  • People expect retreat into high‑trust zones: in‑person ties, closed chats, brands with reputations.
  • Proposals split between proof‑of‑human / real‑identity networks and rule‑based spaces where it doesn’t matter whether participants are bots.

Regulation, incentives, and accountability

  • Many argue the root problem is incentives under consumer capitalism, not AI per se.
  • Suggestions include stronger consumer law, antitrust, and “anti‑Kafka” style rules requiring reachable human recourse.
  • Others are pessimistic: regulatory capture, weak democracies, and bribery‑driven politics make effective regulation unlikely.

Cognition and discourse

  • Several commenters report friends leaning on AI summaries, leading to shallower engagement and shorter attention spans.
  • A cited study claims prolonged LLM use in writing tasks reduces cognitive effort and performance over time.
  • Some intentionally move back to long‑form email/letters to preserve deeper thinking.