ChatGPT Search
Backend and architecture
- Several commenters note ChatGPT Search does not yet run on a standalone OpenAI index; per OpenAI’s own help docs, it largely relies on Bing’s index and other third‑party search providers.
- Some expect this reliance on Bing to sidestep sites’ robots.txt blocks against OpenAI’s own crawler; others stress that OpenAI currently claims to respect robots.txt, though details (e.g., handling of crawl‑delay) are unclear.
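Whether a given site’s robots.txt actually blocks a given crawler is mechanical to check. A minimal sketch using Python’s standard `urllib.robotparser`; the user‑agent tokens (`GPTBot`, `OAI-SearchBot`) are taken as assumptions here and should be verified against OpenAI’s published crawler docs:

```python
# Check whether a robots.txt file permits a given crawler user agent.
# The agent names "GPTBot" / "OAI-SearchBot" are assumptions; verify
# the exact tokens against OpenAI's crawler documentation.
from urllib.robotparser import RobotFileParser

def can_crawl(robots_txt: str, agent: str, url: str) -> bool:
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(agent, url)

robots = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Crawl-delay: 10
Allow: /
"""

# GPTBot is explicitly disallowed; any other agent falls through to "*".
print(can_crawl(robots, "GPTBot", "https://example.com/post"))         # False
print(can_crawl(robots, "OAI-SearchBot", "https://example.com/post"))  # True
```

Note that `crawl-delay` is not part of the original robots.txt de facto standard, which is one reason commenters find OpenAI’s handling of it unclear.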
Comparison with Google, Perplexity, Kagi, etc.
- Many see this as a direct shot at Google and Perplexity; some think OpenAI is “late”, while others point out that Chrome was itself a late entrant to the browser market, as evidence that timing may not matter.
- Users compare it to Bing Copilot, Perplexity, Kagi Assistant, Phind, and Brave Search; some say Perplexity/Kagi still feel better, especially for research and citation quality, while others report ChatGPT Search did better on fresh code/library tasks.
Result quality, hallucinations, and reliability
- Mixed reports: some are “super impressed” (e.g., handling a new library, code for niche FOSS), others show obvious hallucinations (fictional book titles, wrong finance models, wrong language versions, weather off by 20+ degrees, made‑up links).
- The value is seen mostly in multi‑step or fuzzy queries (“plan a trip”, “integrate docs across libraries”) rather than precise facts where errors are more glaring.
- People emphasize that without strong source‑level grounding and transparency, LLM answers can be less trustworthy than simply reading the underlying pages.
SEO, spam, and gaming
- There is broad concern that if the underlying web is SEO‑polluted, LLM summaries may just compress garbage.
- Some hope LLMs can learn to down‑rank SEO slop (using model‑level filters, user feedback, or even identifying AI‑generated spam from their own logs), but others expect an arms race: “SEO‑LLMs” trying to game “search‑LLMs”.
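The down‑ranking idea amounts to a re‑ranking stage between retrieval and the answering model. A toy sketch under stated assumptions: the keyword heuristic and the result schema here are purely illustrative, and a real system would use a trained spam classifier or user‑feedback signals instead:

```python
# Toy re-ranker: demote retrieved pages that look like SEO slop before
# they reach the answering LLM. The marker list is illustrative only;
# a real pipeline would use a trained classifier or feedback signals.
SPAM_MARKERS = ("in this article we will", "top 10 best", "as an ai language model")

def spam_score(text: str) -> float:
    """Fraction of known spam markers present in the text (0.0 to 1.0)."""
    t = text.lower()
    return sum(marker in t for marker in SPAM_MARKERS) / len(SPAM_MARKERS)

def rerank(results: list[dict], penalty: float = 0.5) -> list[dict]:
    """Sort results by relevance discounted in proportion to spamminess."""
    def adjusted(r: dict) -> float:
        return r["relevance"] * (1 - penalty * spam_score(r["text"]))
    return sorted(results, key=adjusted, reverse=True)

results = [
    {"url": "a.example", "text": "Top 10 best widgets! In this article we will...", "relevance": 0.9},
    {"url": "b.example", "text": "Widget API reference: init() takes a config dict.", "relevance": 0.8},
]
# The slop page starts with higher raw relevance but drops below the reference.
print([r["url"] for r in rerank(results)])  # ['b.example', 'a.example']
```

The arms‑race worry maps directly onto this stage: “SEO‑LLMs” would simply learn to generate text that scores low on whatever filter the “search‑LLM” deploys.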
Ads, business model, and profitability
- Intense debate over whether OpenAI will eventually add ads:
- One side: search at massive scale can only be paid for by ads, and investors will demand growth, leading to enshittification similar to Google.
- Other side: OpenAI already has substantial subscription revenue; some hope they can avoid or at least compartmentalize ads.
- Several note the huge compute cost of LLM‑based search and question whether ads can cover it if queries are truly chat‑grounded.
Impact on the web and publishers
- Strong worry that LLM search is parasitic: summarizes answers so well that users don’t click through, undermining ad‑funded publishers and long‑tail blogs.
- Others argue much high‑quality content has always been hobbyist and will persist; some see this as a chance to kill SEO‑driven “content farms”.
- People anticipate more paywalls, access deals, and lawsuits; some think search will balkanize around who pays for access.
UX, latency, and access
- Many like the integrated chat + search UX and citations sidebar; others dislike wordy, slow, streaming answers compared to Google’s near‑instant results and simple blue links.
- Currently limited to Plus/Team and waitlist users (with a slow rollout to free tiers); some see the login requirement for search as a privacy red flag.
- There is interest in using it as a browser search engine (custom URL parameters, Chrome extension, Alfred integration), but latency and rate limits are concerns.
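The browser‑integration interest comes down to a URL template with the query substituted in. A sketch of building such a URL; the `q` and `hints=search` parameters are assumptions based on commenter reports, not a documented API, so verify against ChatGPT’s actual URL scheme before relying on them:

```python
# Build a ChatGPT Search URL for a browser custom-search-engine or
# launcher (e.g. Alfred) integration. The "q" and "hints=search"
# parameters are assumptions from commenter reports, not documented API.
from urllib.parse import quote

def chatgpt_search_url(query: str) -> str:
    return f"https://chatgpt.com/?q={quote(query)}&hints=search"

print(chatgpt_search_url("rust async traits"))
# https://chatgpt.com/?q=rust%20async%20traits&hints=search
```

In a browser’s custom search engine settings, the same template would be entered with `%s` in place of the quoted query. Per‑query latency and rate limits remain the practical obstacles commenters raise.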
Who benefits / use cases
- Power users with strong traditional search skills are split: some see little value, others use LLMs to discover terminology, narrow research space, or stitch together multi‑source answers, then verify via classic search.
- Many foresee this as a building block toward “agents” that not only search but execute tasks (reservations, purchases), raising worries about hidden commercial steering.