I was a top 0.01% Cursor user, then switched to Claude Code 2.0
Code review vs. “behavior-only” development
- A central claim (“you no longer need to review the code, just test behaviors”) triggers strong pushback.
- Many argue this is untenable for multi-dev, customer-facing systems, especially with SOC 2, SLAs, and security concerns; code review is seen as core to reliability and safety.
- Supporters counter that high-velocity teams already lean on telemetry, error budgets, feature flags, and rollbacks (see the flag sketch after this list); they expect code review to become largely performative as AI improves.
- Critics respond that tests can’t cover all behavior, subtle vulnerabilities can slip through, and someone must still be accountable when production fails.
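To make the supporters' guardrail-first position concrete, here is a minimal sketch of percentage-based feature flagging with instant rollback. All names (`FLAGS`, `new_checkout_flow`) are hypothetical; real teams would use a flag service such as LaunchDarkly or Unleash rather than an in-memory dict.

```python
import hashlib

# Hypothetical in-memory flag store mapping flag name -> rollout percentage.
FLAGS = {"new_checkout_flow": 10}

def is_enabled(flag: str, user_id: str) -> bool:
    """Deterministically bucket a user into a percentage rollout."""
    rollout = FLAGS.get(flag, 0)
    # Hash flag+user so the same user always lands in the same bucket.
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < rollout

def checkout(user_id: str) -> str:
    if is_enabled("new_checkout_flow", user_id):
        return f"new flow for {user_id}"  # unreviewed path, watched via telemetry
    return f"legacy flow for {user_id}"   # known-good fallback

# "Rollback" is FLAGS["new_checkout_flow"] = 0: a config flip, not a code revert.
```

The argument is that this machinery, plus alerting on error budgets, contains the blast radius of a bad change faster than review catches it; the critics' rejoinder above is that flags do nothing for the vulnerabilities and subtle logic errors that never trip an alert.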
Where AI coding works today (and where it doesn’t)
- Consensus: agentic coding is powerful for solo devs, side projects, prototypes, and new “AI-ready” repos (good docs, tests, observability).
- Established, complex codebases with deep domain rules are seen as much harder: long feedback cycles, higher risk of regressions, and difficult context provision.
- Some foresee new AI-native systems eventually replacing legacy code; others think that’s far off or impractical.
Genetic algorithms / random code fantasy
- One subthread explores generating random binary or program text and selecting purely on observed behavior, likening it to evolution.
- Multiple commenters note this is essentially genetic algorithms (a toy version appears after this list) and argue it's wildly inefficient given the astronomical search space, discrete program state, and incomplete specs.
- Debate extends into analogies with evolution, airplane design, and agriculture; critics stress that engineering is intentional, not unconstrained random search.
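For reference, the technique the subthread is describing looks roughly like this toy genetic algorithm (all parameters illustrative, assuming a fully specified 64-bit target "behavior"):

```python
import random

TARGET = [1, 0] * 32             # the desired "behavior": 64 bits
POP, GENS, MUT = 100, 200, 0.02  # population size, generations, mutation rate

def fitness(genome):
    """Score purely on observed output; the 'code' itself is never inspected."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    return [1 - g if random.random() < MUT else g for g in genome]

def crossover(a, b):
    cut = random.randrange(len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP)]
for gen in range(GENS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[: POP // 5]  # keep the fittest 20% as breeding stock
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(POP - len(parents))
    ]
print(f"generation {gen}: best fitness {fitness(population[0])}/{len(TARGET)}")
```

Even this trivial case converges only because the fitness function is dense, cheap, and complete; the critics' point is that real specifications are none of those, so selecting on behavior alone has no gradient to climb in a 2^N program space.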
Tool comparisons and workflows
- People compare Claude Code with Cursor, Windsurf, Copilot, and GLM-based services: mixed views, but Claude Code is often praised for agentic workflows.
- Others prefer staying in their existing IDE (GoLand, Emacs via ACP, etc.) and using AI as a helper, not a driver.
- Token cost and consumption are recurring concerns; aggressive agentic setups can rapidly exhaust quotas (rough arithmetic after this list).
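The quota complaint follows from simple arithmetic: an agent re-sends its accumulated context on every turn, so input-token spend grows roughly quadratically with session length. A back-of-the-envelope sketch, with all numbers illustrative rather than any vendor's actual limits:

```python
# Illustrative numbers only; real context sizes and quotas vary by tool.
BASE_CONTEXT = 20_000  # tokens: system prompt, repo map, open files
PER_TURN = 3_000       # tokens added per turn: tool output, diffs, replies
TURNS = 40             # turns in one agentic session

total = 0
for turn in range(1, TURNS + 1):
    # Each request re-sends everything accumulated so far, plus the new turn.
    total += BASE_CONTEXT + PER_TURN * (turn - 1) + PER_TURN
print(f"{total:,} input tokens")  # about 3.26M for a single 40-turn session
```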
Hype, skills, and the craft of programming
- Several see using these tools effectively as a new, hard skill: prompting, debugging, refactoring, and maintaining a mental map of fast-changing code.
- Many criticize overconfident solo-dev narratives (“no review,” “5 years of AI coding,” percentile bragging) as buzzword-heavy and unproven on real, large systems.
- Concerns include loss of the “art” of programming, homogenized LLM writing style, difficult-to-review AI code, and open source inundated with low-quality, AI-generated PRs.
- Others report real productivity gains (e.g., weekend PoCs, RAG pipelines) and argue skeptics are under-updating on how far tools have come.