OpenAI just put the final nail in the coffin of the open World Wide Web

Impact on the Open Web

  • Many argue Operator/agents threaten existing “open web” usage patterns, not the web’s existence. The web already feels centralized, ad-driven, and “enshittified.”
  • Some see this as just another shift in interface (like GUI over CLI): humans may prefer AI-mediated interaction, while the underlying web persists as a substrate.
  • Others fear that if most people only interact through opaque agents, the visible web becomes niche and “weird,” used mainly by enthusiasts.

Middlemen, Ads, and Business Models

  • Strong theme: agents could disintermediate platforms like Google, TripAdvisor, Yelp, affiliate sites, and ad-funded content.
  • Some welcome the potential death of ad-driven middlemen; others note that OpenAI simply becomes a new middleman, with even less transparency than the old ones.
  • Concern that when AI chooses what to buy or recommend, paid placement and “AI tax” will quietly shape choices, similar to current ads but harder to see.
  • If agents bypass sites’ monetization, sites may respond with subscriptions, paywalls, syndication models, or specialized data deals with AI companies.

Agentic AIs, Trust, and User Behavior

  • Divided views on delegating consequential tasks (bookings, purchases):
    • Skeptics: LLMs are too error-prone and non-deterministic; people won’t risk money or important actions.
    • Optimists: people already trust algorithms with consequential decisions (recommendations, full self-driving (FSD), online dating, stock trading); they’ll adopt agents once risk feels managed and liability is covered.
  • Proposed compromise: agents draft multi-step plans and actions; humans review and approve high-risk steps.
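The proposed compromise amounts to a simple gate in the agent loop: execute low-risk steps automatically, pause for approval on anything consequential. A minimal sketch (all names here — `Step`, `run_plan`, `approve` — are hypothetical illustrations, not any real agent API):

```python
# Human-in-the-loop sketch: the agent drafts a plan of steps, and any
# step flagged high-risk must be explicitly approved before it runs.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Step:
    description: str
    high_risk: bool            # e.g., spends money or is irreversible
    action: Callable[[], str]  # the side effect, deferred until approved

def run_plan(plan: List[Step], approve: Callable[[Step], bool]) -> List[str]:
    """Run low-risk steps directly; gate high-risk steps on human approval."""
    log = []
    for step in plan:
        if step.high_risk and not approve(step):
            log.append(f"SKIPPED (not approved): {step.description}")
            continue
        log.append(step.action())
    return log

# Usage: with a reviewer who declines everything, the purchase never happens.
plan = [
    Step("search flights", False, lambda: "found 3 flights"),
    Step("book $450 ticket", True, lambda: "booked ticket"),
]
results = run_plan(plan, approve=lambda step: False)
# results == ["found 3 flights", "SKIPPED (not approved): book $450 ticket"]
```

The design choice is that the risky side effect lives behind a callable, so nothing irreversible can run before the approval check.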

Bots, Anti-Bot Tech, and Interfaces

  • Debate over whether anti-bot tools (e.g., CAPTCHAs, Cloudflare Turnstile) will protect sites or simply push users toward bot-friendly competitors.
  • Some argue the “right” long-term solution is direct APIs for commerce, with agents translating natural language to API calls instead of driving web UIs.
  • Others predict a technical arms race: sites adding heavy bot defenses, DRM-like screenshot blocking, browser attestation, and client certificates.
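The “direct APIs for commerce” idea boils down to the agent emitting a structured request instead of clicking through a web UI. A toy sketch of that translation step, assuming a purely hypothetical booking endpoint and schema (in practice the parsing would be done by an LLM with tool/function calling, not a regex):

```python
# Toy sketch: map a natural-language intent to a structured API request
# rather than driving a browser. Endpoint URL and schema are invented.
import json
import re

def intent_to_request(utterance: str) -> dict:
    """Naive intent parser: pull a city and party size out of free text."""
    city = re.search(r"in ([A-Z][a-z]+)", utterance)
    party = re.search(r"for (\d+)", utterance)
    return {
        "method": "POST",
        "url": "https://api.example.com/v1/reservations",  # hypothetical
        "body": {
            "city": city.group(1) if city else None,
            "party_size": int(party.group(1)) if party else 1,
        },
    }

req = intent_to_request("Book a table for 4 in Lisbon")
print(json.dumps(req["body"]))  # {"city": "Lisbon", "party_size": 4}
```

The point of the sketch is the interface shape: a machine-readable request the site can validate, rate-limit, and monetize directly, which is exactly what a bot-defended web UI prevents.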

Information Quality and LLM Use

  • Many participants still prefer search + primary sites (especially Wikipedia, Stack Overflow) for serious learning and fact-checking.
  • LLMs seen as good for quick overviews, brainstorming, or “conversation starters,” but widely reported to hallucinate and mis-explain, especially in technical/scientific domains.
  • Concern that if most reading is shifted to summaries, original content creators lose direct audience, feedback, and economic incentives.