AI is killing some companies, yet others are thriving – let's look at the data

Content marketing, SEO, and AI “slop”

  • Many report that SEO-driven content marketing is collapsing: long-tail blogspam no longer brings traffic, especially with AI-written competition flooding the web.
  • Some celebrate this (“good riddance” to low-quality SEO pages); others argue AI spam will drown out even good human work by sheer volume.
  • There’s debate over whether higher-quality, curated, human-written content will “reign,” or whether the economics favor endless LLM-generated blogspam that gets “80% of the traffic for 10% of the effort.”
  • New “SEO for LLMs” is already being discussed: structuring content so chatbots recommend your product, and expectations that LLM providers will eventually sell ranking/placement.
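
One plausible form of “structuring content so chatbots recommend your product” is the same machine-readable markup long used for classic SEO. A hedged sketch using schema.org Product markup; the product name and values here are entirely hypothetical:

```
<!-- Hypothetical product page: schema.org JSON-LD that both search
     crawlers and LLM training/retrieval crawlers can parse directly,
     rather than inferring facts from free-form prose. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "ExampleWidget Pro",
  "description": "A hypothetical widget used here for illustration.",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.7",
    "reviewCount": "212"
  }
}
</script>
```

Whether LLM providers actually weight such markup is an open question in the thread; the sketch only shows what “structured for machines” might mean in practice.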

Q&A sites, community decay, and AI competition

  • Several argue Quora and Stack Overflow were already in decline due to clickbait pivots, paywalls, and hostile/overbearing moderation that alienated contributors.
  • Others defend Stack Overflow’s archives as still uniquely valuable, but note that new, high-quality questions and answers have slowed.
  • Homework-help and Q&A sites (e.g., Chegg) are seen as prime collateral damage of ChatGPT, which is nonetheless heavily dependent on those same sites for training data, raising “killing the golden goose” concerns.

Search behavior shifts and new discovery patterns

  • Many commenters now go to LLMs directly for both technical and mundane questions, using Google mainly for maps or official docs.
  • A common workaround for SEO sludge is appending “reddit” to queries; despite Reddit’s low signal-to-noise ratio, it’s often judged better than affiliate-filled review sites.
  • Some users are moving to curated or paid platforms (Substack, Kagi, Bear Blog) and expect a return to smaller, vetted communities and “web-of-trust”-style curation.

Scraping, bots, and infrastructure pressure

  • Site owners report massive increases in scraping since late 2022, likely from AI training and copycat crawlers, driving up bandwidth costs and degrading performance.
  • Blocking only “honest” bots via robots.txt is insufficient, since compliance with it is entirely voluntary; many anonymous scrapers ignore it or mimic real users. Captchas and WAFs help but hurt UX and still miss much of the traffic.
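
For the “honest” crawlers, opting out looks like the fragment below. The user-agent tokens are the ones publicly documented by their operators (OpenAI’s GPTBot, Common Crawl’s CCBot, Google-Extended for Google’s AI training, Anthropic’s ClaudeBot); as the thread notes, this only stops bots that choose to comply:

```
# robots.txt — disallow known AI-training crawlers sitewide.
# Compliance is voluntary; anonymous scrapers will ignore this.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: ClaudeBot
Disallow: /
```

Anything beyond this (rate limiting, captchas, WAF rules) trades away UX, which is exactly the bind site owners in the thread describe.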

Reliability, hallucinations, and long-term data

  • There’s strong skepticism about using LLMs for factual or medical queries; people report hallucinated policies, people, links, and product info.
  • Some see Wikipedia, medical journals, and docs as increasingly important “ground truth” in an LLM-saturated web.
  • A recurring open question: if niche sites, Q&A communities, and specialized verticals (e.g., WebMD, CNET) shrink or die, where will future models get accurate, fresh training data?