Phind-405B and faster, high quality AI answers for everyone
Usage patterns and strengths
- Many use Phind as an AI-enhanced technical search engine, especially for programming, APIs, debugging, and infrastructure “how do I do X?” tasks.
- Several report it as a strong productivity booster, getting them from near-zero knowledge (e.g., AWS VPC/NAT/Fargate) to working solutions quickly.
- Common workflows: replacing Google + Stack Overflow; summarizing articles via URL; code optimization and debugging; niche language questions; learning new tech concepts.
- Users value the linked sources when they appear, treating Phind as “search + oracle” rather than pure chat.
Comparisons with competitors
- Compared with ChatGPT, some prefer Phind for citations and technical focus, and as a fallback when ChatGPT has access/captcha issues.
- Others prefer Kagi Assistant, Brave Search, Bing + GPT‑4o, Perplexity, or Claude for equal or better answers, broader features, or fewer UI issues.
- Several note Phind-70B and now 405B can be competitive with Claude/GPT‑4 on some coding tasks, while GPT‑4 remains best for certain formatting tasks.
Hallucinations, accuracy, and verification
- Multiple reports of confident but wrong answers: nonexistent language features, incorrect C++/Laravel examples, misdescribed hardware, and factual questions without valid references.
- Users appreciate when Phind later admits a reference error or, in newer runs, detects nonsensical queries and corrects itself.
- Some say the “Always search” option sometimes fails to trigger; others find that answers improve on rerun.
- General consensus: model is powerful but must be treated skeptically; follow‑up questions and checking sources remain essential.
Speed vs. quality
- Thread discusses latency as a key barrier for AI search versus classic search.
- Some argue that while token-by-token generation is slower, total “time to understanding” can be faster than traditional search, provided answers are accurate.
Product experience and UI
- Positive: VS Code extension, “artifacts”-style features in development, reduced search-result pollution, and better answer organization promised.
- Negative: buggy web UI (scroll jumps, input obscured on mobile), occasional inference outages, region blocking (e.g., Malaysia), and some users being IP‑blocked.
Pricing, access, and “for everyone” claim
- New Phind‑405B is only for paid Pro users; “for everyone” is interpreted by some as misleading marketing.
- Phind Instant remains free; some want at least a small free quota for 405B to trial it.
- Pricing criticized for having only a $20/month tier; some want cheaper, low‑usage plans.
API, ecosystem, and openness
- Many request an API and OpenRouter‑style access so they can integrate Phind into their own tools and compare it on public leaderboards.
- Company indicates API is lower priority than the main product but is now under consideration.
- Some want weights released (especially Instant/70B), with debate over whether Llama’s license requires it; the thread reaches no conclusion.
- Concerns raised about opaque data handling and trustworthiness; one user says attempts to clarify for corporate use went unanswered.
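The “OpenRouter-style access” requested above means an OpenAI-compatible chat-completions payload that any client library can send. A minimal sketch of building such a payload is below; the `phind/phind-405b` model id is hypothetical (the thread confirms no public Phind API), and only the payload shape reflects the real OpenRouter/OpenAI format:

```python
import json


def build_chat_request(model: str, question: str) -> dict:
    """Build an OpenAI-compatible chat-completions payload, the format
    OpenRouter-style gateways accept. Actually sending it would require
    an endpoint URL and API key, which Phind does not currently offer."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": question}],
    }


# Hypothetical model id -- Phind is not actually listed on OpenRouter.
payload = build_chat_request("phind/phind-405b", "Explain AWS NAT gateways briefly.")
body = json.dumps(payload)  # what a client would POST to the gateway
```

Because the payload is the same one used for GPT-4o, Claude, and other hosted models, this is also what would let Phind appear on public leaderboards that drive models through a common API.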
Model behavior and philosophy
- Long subthread critiques LLM “apologies” and anthropomorphic phrasing as misleading, since models lack real understanding, memory of wrongdoing, or capacity for genuine care.
- Others stress that hallucinations and lack of “I don’t know” are structural to current LLMs; research on source‑aware training and better reasoning is referenced as a path forward.
- Some propose using LLMs mainly to generate good search keywords and filter human-written sources, rather than as direct answer generators.
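The keyword-first proposal in the last bullet can be sketched as a small pipeline. This is an illustration of the idea only: `ask_model` and `search` are placeholder callables (any LLM completion function and any search backend), and the overlap-based ranking is a naive stand-in for a real relevance filter:

```python
from typing import Callable, Dict, List


def keywords_then_filter(
    question: str,
    ask_model: Callable[[str], str],       # placeholder LLM completion function
    search: Callable[[str], List[Dict]],   # placeholder search backend returning
                                           # results like {"url": ..., "snippet": ...}
    max_results: int = 5,
) -> List[Dict]:
    """Use the LLM only to craft search keywords, then rank the
    human-written sources; the reader judges the sources directly."""
    # 1. Ask the model for search terms instead of a direct answer.
    prompt = f"List 3-5 concise, comma-separated search keywords for: {question}"
    keywords = [k.strip() for k in ask_model(prompt).split(",") if k.strip()]

    # 2. Run an ordinary search with those keywords.
    results = search(" ".join(keywords))

    # 3. Rank by naive keyword overlap with each result's snippet and URL.
    def score(result: Dict) -> int:
        text = (result.get("snippet", "") + " " + result.get("url", "")).lower()
        return sum(1 for k in keywords if k.lower() in text)

    return sorted(results, key=score, reverse=True)[:max_results]
```

The design keeps the LLM out of the answer path entirely: a hallucinated keyword at worst yields poor search results, whereas a hallucinated answer is presented as fact.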