What Claude Code chooses
Default stacks and tool biases
- Commenters note the output is heavily web-centric and JS/TS-based: React (often implicit), Tailwind, shadcn/ui, Drizzle, Express, Vercel, GitHub Actions, npm, Supabase, Neon/Fly, etc.
- Some are surprised that React isn't explicitly named in the report and assume it is simply treated as the unspoken default.
- Tailwind and shadcn/ui are seen as “AI magnets”: easy defaults that produce many similar-looking sites, comparable to the old Bootstrap monoculture.
- Drizzle overtaking Prisma in newer models is praised; several commenters call Prisma "an abomination" and Drizzle "the obvious choice" by comparison.
- Traditional clouds reportedly get "zero primary picks"; some see this as deserved, given their comparatively poor developer experience (DX).
Overriding behavior with CLAUDE.md / AGENTS.md
- Multiple people confirm that explicit stack instructions (e.g., “use Node+Hono+TS, no Tailwind”, “always use bun/uv”) work reasonably well.
- Others say agent files are often ignored unless phrased as imperative DO/DON’T rules with rationale, more like a linter config than a README.
- Even then, adherence is described as partial (e.g., ~80%), not something to rely on for hard constraints.
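The imperative DO/DON'T pattern described above can be sketched as a hypothetical CLAUDE.md fragment; the specific rules and rationale lines are illustrative, assembled from the stacks and constraints commenters mention, not an official format:

```markdown
# CLAUDE.md

## Stack rules (imperative, linter-style)

- DO use Node + Hono + TypeScript for all new services. Rationale: matches existing infra.
- DO run package scripts with `bun` and Python tooling with `uv`.
- DON'T add Tailwind or any other CSS framework. Rationale: the design system is hand-rolled.
- DON'T introduce new third-party services (databases, feature flags, hosting) without asking first.
```

Per the thread, even rules phrased this way are followed only partially (~80%), so hard constraints still need enforcement outside the agent file (linters, CI checks, review).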
Quality of architectural decisions
- Several report that Claude Code recommends new third‑party services (Neon, Fly, feature-flag SaaS, etc.) even when existing infrastructure is described in its memory files, and that it generally over‑engineers: extra abstraction layers, heavy versioning, and a reluctance to delete code.
- Agents are seen as good at boilerplate and CRUD, but poor at novel or business-specific architecture; human oversight is still required, especially around security and complexity.
- Some appreciate that models often roll their own simple code instead of pulling in many npm packages, but others worry this trades dependency hell for massive duplication.
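The "roll your own" tendency typically looks like this: instead of adding a small utility dependency (say, a slug library from npm), the model emits an inline helper. This is a hypothetical illustration of the pattern, not output from any specific session:

```typescript
// Hand-rolled slug helper of the kind agents often emit inline,
// rather than adding a one-purpose npm dependency.
export function slugify(input: string): string {
  return input
    .toLowerCase()
    .normalize("NFKD")                 // split accented chars into base + combining mark
    .replace(/[\u0300-\u036f]/g, "")   // strip the combining marks
    .replace(/[^a-z0-9]+/g, "-")       // collapse non-alphanumerics into hyphens
    .replace(/^-+|-+$/g, "");          // trim leading/trailing hyphens
}
```

A dozen of these per repo is harmless; hundreds of slightly different copies across services is the "massive duplication" commenters worry about.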
Advertising, LLM SEO, and bias concerns
- A strong thread speculates about "invisible" product placement: models nudging users toward particular stacks, clouds, or services, possibly monetized or gamed via training-data poisoning and "LLM SEO" (also styled AEO/GEO, answer/generative engine optimization).
- Others counter that Anthropic appears to use curated expert data and manual tuning, which would resist naive spam tactics, though this is seen as expensive and imperfect.
Impact on developers and non‑experts
- “Vibe coders” and non‑developers using Claude are expected to follow these defaults blindly, which makes understanding those defaults strategically important (e.g., for agencies offering cleanup/productionization).
- Some worry this default‑driven world will entrench popular tools and reduce innovation; others argue LLMs simply mirror community preferences already present in the training data.