Claude can now search the web
Feature scope & rollout
- Web search has been added to claude.ai and the official apps as a “feature preview”, limited to paid US users; free-tier users and other countries are promised later.
- Not yet available via the API, though many expect it will come, exposed as a tool / MCP server under the hood.
- Some users note they already wired Claude to search engines themselves via function-calling or MCP; this is seen as making an ad‑hoc pattern “first‑class” for non‑technical users.
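The ad-hoc pattern described above can be sketched as follows. This is a hypothetical example, not Anthropic's implementation: the tool definition uses the JSON-schema shape that tool-use APIs expect, and the search backend is a stub standing in for a real engine (Brave, SerpAPI, etc.).

```python
# Hypothetical "web_search" tool definition plus a local dispatcher.
# A real setup would pass WEB_SEARCH_TOOL to the model and route its
# tool_use requests through dispatch_tool_use().

WEB_SEARCH_TOOL = {
    "name": "web_search",
    "description": "Search the web and return result snippets.",
    "input_schema": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Search query"},
        },
        "required": ["query"],
    },
}

def stub_search(query: str) -> list[dict]:
    # Stand-in for a real search API call.
    return [{"title": f"Result for {query}", "url": "https://example.com"}]

def dispatch_tool_use(name: str, tool_input: dict) -> list[dict]:
    # When the model emits a tool-use request, route it to the matching function.
    if name == WEB_SEARCH_TOOL["name"]:
        return stub_search(tool_input["query"])
    raise ValueError(f"unknown tool: {name}")
```

The point of web search shipping natively is that non-technical users no longer need to wire up this kind of plumbing themselves.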
Comparison to other AI search tools
- People compare it to Perplexity, ChatGPT web search, Gemini, Grok DeepSearch, and Kagi Assistant.
- Several say this is a catch‑up feature: others have had integrated search for a year or more.
- Claude is widely praised for coding, research assistance, and use as a “conversational partner”; some find Grok or OpenAI models better for code or reasoning.
- Some now do most “search” via AI, using Google only as a fallback or for quick AI summaries at the top of results.
How web search works
- Users ask what backend is used and whether it’s real‑time. One tester saw Claude return summaries and links to their own site without any live hits, implying use of an internal scraped index.
- Later investigation notes Brave Search as the apparent index provider (matching results and Brave being listed as a subprocessor).
- People distinguish between frontends that call a separate search API and an LLM that can plan, iterate, and re‑rank during multi‑step “deep research.”
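The distinction in the last bullet can be made concrete with a sketch. This is illustrative only: search() and score() are stubs standing in for a real search API and an LLM or embedding-based relevance scorer, and the loop structure is an assumption about how "deep research" agents typically work, not a description of any vendor's code.

```python
# Minimal sketch of an iterative "deep research" loop: instead of one
# search call, the agent plans queries, gathers results across steps,
# and re-ranks everything it has seen against the original question.

def search(query: str) -> list[str]:
    # Stub for a real search API.
    return [f"doc about {query}", f"blog about {query}"]

def score(question: str, doc: str) -> float:
    # Crude lexical overlap as a placeholder for semantic scoring.
    q, d = set(question.lower().split()), set(doc.lower().split())
    return len(q & d) / max(len(q), 1)

def deep_research(question: str, max_steps: int = 3) -> list[str]:
    queries = [question]            # the initial "plan" is just the question
    seen: dict[str, float] = {}
    for _ in range(max_steps):
        if not queries:
            break
        query = queries.pop(0)
        for doc in search(query):
            seen[doc] = max(seen.get(doc, 0.0), score(question, doc))
        # A real agent would ask the LLM here whether to refine the
        # query and iterate, appending new queries to the plan.
    # Re-rank everything gathered so far, best first.
    return sorted(seen, key=seen.__getitem__, reverse=True)
```

A frontend that merely forwards a query to a search API skips the planning and re-ranking steps entirely; that is the difference users are pointing at.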
Robots.txt, crawling, and blocking AI
- Large, contentious debate over whether robots.txt should apply:
  - One side: any automated system (including LLM tools) should respect it; ignoring it externalizes costs, harms ad‑funded sites, and will lead to firewalls, WAF rules, CAPTCHAs, and legal pushback.
  - Other side: robots.txt was designed for recursive crawlers and indexing, not one‑off, user‑driven fetches; it’s voluntary anyway, not an enforcement mechanism.
- Admins report heavy traffic from various AI bots despite disallow rules, and resort to explicit blocking or tarpits.
- Some suggest new conventions like ai.txt or llms.txt, but many doubt non‑compliant actors would honor them.
Impact on the web & search quality
- Concern that LLMs “free‑ride” on search engines’ indexes while sending little traffic back to sites, threatening ad‑supported content and encouraging more paywalls and anti‑bot measures.
- Others argue much of today’s web is already SEO spam; AI search plus better re‑ranking (or services like Kagi) might surface higher‑quality material.
- Several note that all current LLM web modes still tend to read the top N results, so they inherit blogspam and low‑quality content; RAG over bad search results is criticized as “garbage in, garbage out.”
- Some fear a “Kessler‑effect” / “Habsburg internet” where AI‑generated slop trains future models, further degrading both the web and AI answers.