The trust collapse: Infinite AI content is awful

AI “slop” and collapsing signal-to-noise

  • Many commenters report being overwhelmed by AI-generated “slop”: low-quality ads, YouTube shorts, Instagram shopping posts, and even fake cute-animal videos that erode enjoyment and trust.
  • Visual artifacts (weird cars, plastic-looking actors) and stylistic tells (LinkedIn-esque cadence, random bolding) are now used as crude filters, even at the risk of discarding genuine work.
  • The core loss: the internet stops feeling like a window into real people and real events, because any plausible content might be synthetic.

Marketing, engagement, and capitalism

  • Several argue the problem long predates AI: marketing and growth-at-all-costs incentives already wrecked online signal-to-noise; AI just automated and amplified it.
  • Engagement optimization, not AI per se, is blamed for exploiting human weaknesses (dopamine hits, outrage bait, addictive feeds).
  • Some frame this as a systemic property of capitalism / VC culture (embedded growth obligation, paperclip-maximizer analogy), where every channel gets mined until it fails.

Trust in institutions, media, and expertise

  • Yuval Harari’s “build trusted institutions” idea sparks debate:
    • Some say this is exactly what’s breaking: media, governments, and science are perceived as captured, biased, or unaccountable (COVID, Iraq war, billionaire-owned outlets).
    • Others emphasize that distrusting one source doesn’t justify trusting worse alternatives; people use inconsistent standards when judging mainstream vs fringe claims.
  • There’s widespread concern that deliberate campaigns have eroded trust in press and institutions, creating fertile ground for AI-powered misinformation.

Filtering, curation, and reputation

  • Many see earned trust and layered curation as the only sustainable response: RSS with aggressive pruning, trusted communities (HN, some subreddits), and a hoped-for "PageRank"-style ranking for trusted human content.
  • Some expect a premium on clearly human, local, or in-person relationships (recommendations for builders, US-based call centers, "office down the street" sales).
  • Others are more optimistic about AI as a tool for verification and deep research, if grounded in sources and explicit citations.
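To make the "PageRank for trust" idea above concrete, here is a minimal toy sketch, purely illustrative and not something proposed in the thread: accounts form a graph where an edge means "A vouches for B", and trust propagates iteratively the way PageRank propagates link authority. All names and the example graph are hypothetical.

```python
# Toy PageRank-style trust propagation over an endorsement graph.
# Edges mean "A vouches for B"; trust flows along endorsements.

def trust_rank(edges, damping=0.85, iters=50):
    """Iteratively propagate trust scores, PageRank-style."""
    nodes = {n for a, b in edges for n in (a, b)}
    out = {n: [] for n in nodes}
    for a, b in edges:
        out[a].append(b)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        nxt = {n: (1 - damping) / len(nodes) for n in nodes}
        for a in nodes:
            if out[a]:
                # split this node's trust among everyone it vouches for
                share = damping * rank[a] / len(out[a])
                for b in out[a]:
                    nxt[b] += share
            else:
                # dangling node: redistribute its trust evenly
                for b in nodes:
                    nxt[b] += damping * rank[a] / len(nodes)
        rank = nxt
    return rank

# Hypothetical endorsement graph: carol is vouched for by two accounts.
edges = [("alice", "bob"), ("bob", "carol"),
         ("carol", "alice"), ("dave", "carol")]
scores = trust_rank(edges)
```

In this toy graph, carol (endorsed by two accounts) ends up with the highest score, while dave (endorsed by no one) ends up lowest. A real system would have to handle the hard parts the thread worries about: Sybil accounts, bought endorsements, and bootstrapping the initial trusted set.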

Business models, sales, and what’s next

  • AI lets low-effort scammers and tiny startups look as polished as large incumbents, making longevity harder to assess (“will you be here in 12 months?”) and raising the perceived risk of paying for subscriptions.
  • Inboxes of creators and companies are being flooded by AI-personalized outreach, making genuine contact harder; some foresee bots mimicking real support dialogues before pivoting to a pitch.
  • A few argue infinite AI content may force healthier trust/reward systems or a shift back to smaller, reputation-driven networks; others doubt such systems can be “fixed” at all.