How ChatGPT serves ads

Overall Reaction

  • Many commenters see this as the start (or acceleration) of “enshittification” of LLM products and the end of a brief “golden age” of relatively clean, high‑quality tools.
  • Others are more accepting, noting that ads only appear on the free and new low‑cost ad‑supported tier, and that higher‑priced plans remain ad‑free for now.
  • Some users say they have already cancelled or will avoid ChatGPT entirely due to ads.

Business Model and Economics

  • Recurring question: how is “free” LLM inference supposed to be funded, if not by ads?
  • Some argue a paid‑only or free‑trial model would be preferable, even if it limited access.
  • Others say advertising has historically been the most effective way to monetize large consumer products; they see this as inevitable, especially ahead of an IPO.
  • The earlier public statement that ads would be a “last resort” is interpreted by some as evidence of financial pressure; others see it as PR / “doublespeak” that always implied ads were coming.

Implementation Details and Ad Blocking

  • The separation of ads into a distinct event stream is seen as clever engineering: it enables A/B testing and keeps ad content technically separate from core model outputs.
  • People discuss blocking specific telemetry and ad domains, or stripping single_advertiser_ad_unit payloads via browser‑layer interception, while noting this could trigger an arms race with the vendor.
  • Some expect eventual standardization of AI ad protocols, potentially protected or mediated by browsers.
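The browser‑layer stripping idea can be sketched as a small filter over the event stream. Only the single_advertiser_ad_unit name comes from the thread; the stream format and the function below are hypothetical illustrations, not ChatGPT's actual wire format.

```python
import json

def strip_ad_events(raw_events):
    """Drop events that carry the ad payload; pass everything else through.

    `raw_events` is assumed to be an iterable of JSON strings, one per
    server-sent event -- a hypothetical format for illustration.
    """
    kept = []
    for raw in raw_events:
        try:
            event = json.loads(raw)
        except json.JSONDecodeError:
            kept.append(raw)  # non-JSON lines pass through untouched
            continue
        # Drop any event whose serialized form mentions the ad unit key.
        if "single_advertiser_ad_unit" in json.dumps(event):
            continue
        kept.append(raw)
    return kept
```

In practice something like this would live in a userscript or proxy layer, and any server‑side schema change breaks it — which is exactly the arms‑race dynamic commenters anticipate.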

Trust, Bias, and Invisible Ads

  • Strong concern that future ads will be blended into responses: product mentions, omission of competitors, or “steering” towards more ad‑friendly answers.
  • Some argue blocking “transparent” ads might push companies toward more opaque, embedded ones; others counter that history shows you often get both, so all ads should be blocked when possible.
  • There is debate over whether existing law meaningfully restricts undisclosed sponsored content in LLM replies; outcome is labeled as unclear.

Alternatives: Local and Self‑Hosted Models

  • Several see this as a strong push toward local or self‑hosted LLMs, where ads and data collection can be avoided.
  • Discussion covers:
    • Local models using tools to access the web, similar to hosted models.
    • Hardware tradeoffs: decent models need roughly 64–128 GB of RAM; smaller but capable open models (e.g., Qwen, DeepSeek, GLM, Kimi) are viable, though aggressive quantization can make models noticeably “stupid”.
    • Energy and hardware costs sometimes rivaling cloud token costs, so economics are use‑case dependent.
  • Web‑search tools (Tavily, Exa, Firecrawl, etc.) are mentioned, but many have terms allowing training on user queries and sharing data, which concerns privacy‑minded users.
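The RAM tradeoff above is mostly arithmetic: weight memory scales with parameter count times bits per weight. A back‑of‑envelope sketch (the overhead factor for KV cache and activations is an illustrative assumption, not a benchmark):

```python
def model_ram_gb(params_billion, bits_per_weight, overhead_factor=1.2):
    """Back-of-envelope RAM estimate for running a quantized model.

    overhead_factor loosely covers KV cache and activations; the 1.2
    default is an illustrative assumption, not a measured figure.
    """
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead_factor / 1e9

# A 70B model at 4-bit quantization lands around 42 GB -- inside the
# 64-128 GB range discussed; at 8-bit it roughly doubles to ~84 GB.
print(round(model_ram_gb(70, 4)))
print(round(model_ram_gb(70, 8)))
```

This is also why the thread frames economics as use‑case dependent: the same arithmetic that makes a 70B model fit on a workstation puts its energy and hardware cost in the same ballpark as cloud tokens for light usage.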

Adversarial Content and “LLM SEO”

  • Commenters anticipate “Generative Engine Optimization”: companies shaping content so models recommend their products, analogous to SEO.
  • Some report anecdotal cases where obscure services got recommended by ChatGPT despite poor traditional SEO, suggesting LLMs can surface niche sites.
  • Suggestions include potential bot farms probing and “arguing with” models to nudge them toward certain services, though this remains speculative in the thread.

Wider Societal and Ethical Concerns

  • Worries about:
    • Highly targeted psychographic ads derived from intimate chat data.
    • Political advertising and propaganda integrated into conversational agents.
    • Defense contracts vs ad revenue as funding sources, with both seen as ethically fraught.
  • A substantial contingent argues that advertising as a business model is inherently harmful (attention capture, manipulation) and that resisting it — via ad blockers or by abandoning ad‑funded products entirely — is morally legitimate.