Be Worried

Possibility of Resisting or Regulating AI Trajectory

  • Debate over whether “the madness” can be stopped: some argue history shows we can curb technologies (e.g., human cloning, nuclear weapons use); others point to technological momentum and human inaction (e.g., on climate) as proof we won’t.
  • Several reject “technological inevitability,” saying all tech persists only because humans choose to fund and enable it.
  • Suggestions include AI-focused grassroots activism similar to FSF/ACLU, and global regulation of large proprietary models; others reply that tech is already widely regulated and this need not be bad.

Cultural and Social Media Impacts

  • Many see AI as qualitatively different from past fads (Web3, NFTs, VR, etc.) because AI-generated “slop” is now everywhere in ordinary media consumption.
  • Photographers and creators report abandoning platforms like Instagram due to algorithmic bias toward AI content, reels, and influencer material.
  • Short-form video and infinite scroll are cited as having already degraded attention and discourse; adding AI generation is seen as intensifying this.

Manipulation, Mind Control, and the Infosphere

  • Strong resonance with the article’s “Matrix twist”: not pods, but real-world humans whose thoughts and feelings are machine-generated for control.
  • Some think AI is just the latest manipulative medium (like TV and advertising) and not uniquely dangerous; others stress its new scale, personalization, and automation (e.g., thousands of individually targeted AI-generated videos).
  • There’s disagreement on LLMs’ net effect:
    • One camp says they often give more balanced, rational answers than partisan media.
    • Another points to evidence that models tend to validate users and can amplify delusions, especially in very long chats.

Trust, Truth, and the Future of the Web

  • One vision: the internet fills with trash → people revert to trusted authorities, provenance markets, and smaller gated communities (forums, Discord-like spaces).
  • Others argue “central truth” is gone for good; people will just cluster around preferred authorities, including “the Algorithm” or LLMs.
  • Concern that AI may destroy the “good faith” that made the early web special, pushing people either off the open web or into heavily filtered enclaves.

Strong vs Weak AI, and Existential Risk

  • Some criticize earlier rationalist focus on “strong AI” extinction risk as a distraction from tangible harms of current “weak AI” and from climate change.
  • Others remain convinced that more powerful AI could still lead to human extinction within years, provoking pushback that this is sci-fi-style speculation.

AI Content Quality, Detection, and Adoption

  • Disagreement over claims that AI detection is “barely better than random”: many report that AI text and images remain obviously detectable, especially low-effort slop.
  • One side asserts “most people hate AI content” and platforms will prefer real-person creators; opponents say people only reject obviously bad AI and that AI can be styled and personalized to appear uniquely human.
  • Debate over AI influencers: some note strong backlash and practical limits (real-world presence, live events); others respond that rapidly improving video generation will erode these barriers and that backlash depends on detectability.

Individual Responses and Ethics of Consumption

  • Some refuse to “be worried,” arguing constant panic erodes personal agency.
  • Others recommend:
    • Using the internet to learn and analyze, not to “consume content.”
    • Avoiding AI-written code or help until after struggling on one’s own.
    • Returning to paid, niche, or human-curated platforms and communities.
  • A few are working on tools to use LLMs to restore metacognitive skills rather than replace them.

Critiques of the Article’s Core Assumptions

  • Several commenters challenge the article’s key premises:
    • No evidence that AI-optimized content is “inherently superior by dopamine output.”
    • The conclusion that people will be “mind-controlled by LLMs and their handlers” is seen as asserted, not demonstrated.
  • Others argue the article underplays that algorithmic manipulation has already been the norm on major platforms for a decade; LLMs are an extension, not a beginning.