Most-read tech publications have lost over half their Google traffic since 2024

Overall sentiment about tech publications

  • Many participants say they won’t miss most big tech sites, dismissing them as SEO-driven “slop”: thin product reviews and affiliate spam that polluted search results.
  • Others note some genuinely useful outlets (e.g., how‑to/tutorial style sites) and worry about who will document consumer-level tech if they vanish.
  • Several blame years of enshittification: autoplay videos, pop‑ups, chumbox ads, cookie banners, paywalls, and low-effort content optimized for speed and keywords.
  • Some argue these sites squandered their goodwill; their decline would have come even without AI.

LLMs as the new interface to information

  • Multiple anecdotes of using LLMs (often with images) to diagnose hardware issues, wire PSUs, fix bricked laptops, or extract recipes from unusable ad-heavy pages.
  • Fans say LLMs feel like a return to early Google: they surface real manuals/docs, filter out SEO farms, and synthesize across sources.
  • Critics stress hallucinations and dangers of trusting LLMs for high-risk tasks (e.g., hardware wiring), recommending validation via manuals or multiple models.
  • Some report LLMs working well on mainstream code patterns but failing on unusual, private, or niche codebases.

Incentives, knowledge production, and “tragedy of the commons”

  • Strong concern that LLMs redirect search away from original sites, destroying ad-based incentives to create content.
  • Several describe this as a classic tragedy of the commons: short‑term gains from free training on the open web, long‑term degradation as sources die.
  • Questions raised: What happens when LLMs need up‑to‑date info? Can training corpora exclude AI‑generated text? How to sustain “organic” knowledge?
  • Some suggest futures where LLMs or platforms pay for curated data; others doubt AI companies will pay meaningfully.

Ads, Google, and business models

  • Many see Google’s AI overviews as a way to keep users on Google and siphon clicks from publishers, with the expectation that Google will eventually insert ads into AI responses.
  • Observations that Google’s ad revenue hasn’t yet fallen, even as publisher traffic reportedly collapses.
  • Widespread frustration with ad-driven UX, but also recognition that subscriptions and micropayments are hard to implement, even if they might support higher-quality reporting.

AI slop, feedback loops, and manipulation

  • Fears that AI‑generated content will increasingly train future models, leading to low-quality “AI slop” and bias.
  • Mentions of AI outputs already echoing obscure forum threads, enabling easy astroturfing and narrative shaping.
  • Some speculate pre‑LLM web text already has a premium as “clean” training data.
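The “clean pre‑LLM data” idea above can be sketched as a simple corpus filter. This is a minimal illustration, not a real pipeline: it assumes each document carries a crawl date, and the cutoff (ChatGPT’s public release) and field names are purely illustrative.

```python
from datetime import date

# Hypothetical cutoff: text crawled before ChatGPT's public release
# (Nov 2022) is less likely to contain AI-generated content.
PRE_LLM_CUTOFF = date(2022, 11, 30)

def filter_pre_llm(corpus):
    """Keep only documents crawled before the cutoff date."""
    return [doc for doc in corpus if doc["crawled"] < PRE_LLM_CUTOFF]

# Toy corpus; field names ("url", "crawled", "text") are illustrative.
corpus = [
    {"url": "a.example/howto", "crawled": date(2021, 5, 1), "text": "..."},
    {"url": "b.example/post",  "crawled": date(2024, 2, 9), "text": "..."},
]
clean = filter_pre_llm(corpus)  # keeps only the 2021 document
```

In practice a date cutoff is a blunt instrument (it discards all newer human writing too), which is part of why commenters doubt “organic” knowledge can be sustained this way.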