Kagi Assistants

Kagi as a backend for LLMs vs Google/Bing

  • Several commenters highlight that Kagi’s filtered, low-noise index appears to make the same LLM perform better than it does backed by Google/Bing, chiefly by surfacing relevant sources (often Wikipedia) higher and avoiding SEO garbage and ad-heavy sites.
  • Others push back that “show Wikipedia higher” isn’t a complete strategy and question how this helps with queries like product recommendations (“top luggage for solo travel”).
  • Kagi staff claim they do particularly well on product research because they downrank low-quality, tracker-heavy review sites.

Search quality, incentives, and “slop”

  • Many argue Google is structurally incentivized to keep garbage in results because those pages run ads and were cultivated by SEO. They doubt Google will fix this for normal users, though it might for its own agents.
  • Kagi is praised for initiatives like SlopStop and for explicitly positioning LLMs as tools for search, not as autonomous “AI agents.”
  • Skeptics counter that all vendors say they “keep LLM flaws in mind,” and that in practice AI summaries often just repackage the same low-quality content.

Kagi’s focus: search engine or AI company?

  • A vocal group wants Kagi to “just do search” and fears user money is being diverted into “bullshit AI products,” comparing this unfavorably to Google’s AI search.
  • Kagi representatives respond that:
    • Assistants are built on top of search and intended to enhance it (e.g., research/workflow tools, not long-form auto-writing).
    • Over 80% of members use Kagi Assistant/AI features.
    • They plan to separate subscriptions for search vs advanced AI.
  • Some users worry about selection bias: people who dislike AI may avoid Kagi entirely, so usage stats don’t prove broad demand.

Pricing, tiers, and regional concerns

  • Kagi is seen as “expensive for a search engine,” especially outside the US; some request cheaper, search-only plans or pay-per-search credits.
  • Others argue $10/month is trivial for developers and note that advanced AI features live on the higher “Ultimate” tier, so basic search users needn’t subsidize heavy AI.

User experiences with Kagi Assistants

  • Fans report a roughly 50/50 split between standard search and the assistant, find the quick/research modes helpful for complex queries, and appreciate opt-in triggers like ?, !quick, and !research.
  • Several users say Kagi’s research assistant handled known hallucination traps better than Gemini and some other models, often by explicitly admitting it couldn’t find matching information.
  • Others see little difference between Kagi’s quick assistant and directly calling a strong base model (e.g., Kimi), and struggle to identify when to prefer Kagi’s managed assistant.

Designing around LLM “bullshit”

  • Kagi’s ML lead describes LLMs as inherently capable of “bullshit” and frames this as a product-design problem: build workflows that keep users in the loop, emphasize citations, and make fact-checking easy rather than asking users to fully trust answers.
  • Critics argue that any system that confidently presents plausible but sometimes-wrong answers will still induce trust, anchoring, and confirmation bias; they doubt UX can fully mitigate that.

Capabilities, limitations, and comparisons

  • Kagi assistants can read URLs and files; some users have built similar URL-reading features in their own products.
  • There are complaints that users can’t easily steer crawling strategies (e.g., constrain categories or force use of a site’s internal search).
  • Comparisons to Perplexity’s “deep research” note that Kagi is often faster; some users would happily trade more time and tool calls for more exhaustive coverage.
  • Kagi’s “Quick Assistant” is described as a managed experience currently backed by Qwen 3 235B, allowing Kagi to swap models and control tool usage and reliability.
  • A Kagi MCP server and upcoming assistant/search APIs are mentioned as routes for integrating Kagi search with external tools like ChatGPT/Claude.
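
For context on the integration point above: Kagi’s search can already be called over plain HTTP via its (beta) Search API, which is the kind of backend an MCP server or external tool would wrap. The sketch below assumes the documented beta endpoint (`https://kagi.com/api/v0/search`) and its `Bot` token auth scheme; the `limit` parameter and the result-filtering detail (`t == 0` marking search results in the payload) are assumptions from that beta documentation, and the upcoming assistant API is not covered since its shape isn’t public.

```python
import json
from urllib.parse import urlencode
from urllib.request import Request, urlopen

# Assumed beta Search API endpoint; verify against Kagi's current API docs.
KAGI_SEARCH_ENDPOINT = "https://kagi.com/api/v0/search"


def build_search_request(token: str, query: str, limit: int = 10) -> Request:
    """Build an authenticated GET request against the Kagi Search API."""
    url = f"{KAGI_SEARCH_ENDPOINT}?{urlencode({'q': query, 'limit': limit})}"
    # Kagi uses a "Bot <token>" scheme rather than the usual "Bearer <token>".
    return Request(url, headers={"Authorization": f"Bot {token}"})


def search(token: str, query: str) -> list[dict]:
    """Run a search and return only ordinary result objects from the payload."""
    with urlopen(build_search_request(token, query)) as resp:
        payload = json.load(resp)
    # Assumption: items with t == 0 are search results (others are metadata
    # such as related searches).
    return [item for item in payload.get("data", []) if item.get("t") == 0]
```

An MCP server or ChatGPT/Claude tool plugin would essentially expose `search()` as a callable tool, letting the model pull Kagi-ranked results instead of scraping a results page.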