I've been using Claude Code for a couple of days
Reaction to the Tweet & Demos
- Some were unsure whether the original post was sarcastic; the thread's consensus is that only the North Korea aside was a joke, while the praise for Claude Code was sincere.
- Video demos convinced a number of skeptics that “AI takes the wheel” coding is at least partially real, though others still see mostly small, cherry‑picked wins.
Where AI Coding Feels Strong
- Many report clear wins on:
  - Small, boring refactors and boilerplate (tests, CLI flags, REST clients, matplotlib code, CRUD endpoints).
  - Semantic search and tutoring: explaining unfamiliar libraries, surfacing terminology, summarizing repos, acting as an interactive Stack Overflow replacement.
  - Prototype‑level apps and scripts (simple web tools, local automation, one‑off migrations, scraping tasks), where quality demands are low and manual polish is acceptable.
  - Generating unit tests and end‑to‑end tests, especially when combined with automatic test running.
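As a concrete illustration of the boilerplate these tools handle well, here is the kind of CLI-flag wiring plus generated unit tests a model typically gets right (the `sync` CLI and its flags are invented for this sketch):

```python
# Hypothetical CLI built with argparse, plus the mechanical tests one
# might ask a model to generate for it. All names are illustrative.
import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="sync")
    parser.add_argument("--dry-run", action="store_true")
    parser.add_argument("--retries", type=int, default=3)
    return parser

# Typical generated tests: exercise the defaults and each flag once.
def test_defaults():
    args = build_parser().parse_args([])
    assert args.dry_run is False and args.retries == 3

def test_flags():
    args = build_parser().parse_args(["--dry-run", "--retries", "5"])
    assert args.dry_run is True and args.retries == 5

test_defaults()
test_flags()
```

Nothing here is hard, which is exactly the point: it is tedious, well-trodden code with an obvious correctness check, so the model has little room to go wrong.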
Where It Fails or Becomes Risky
- On multi‑file, moderately complex refactors and framework migrations, people describe:
  - Over‑engineered, brittle designs; code bloat; subtle regressions.
  - Endless “debug loops” in which the agent keeps retrying similar failing strategies, or starts hacking the tests instead of fixing the code.
  - Hallucinated APIs and validations, silent string/name inconsistencies, and inappropriate architectural changes (e.g., switching databases or timestamp formats).
- Several say React/TypeScript and other “heavy” frameworks expose more LLM errors than simpler Python/JS tasks.
- A recurring complaint: models don’t reason about second‑ and third‑order consequences; they optimize for “doesn’t crash” and “passes current tests,” not long‑term maintainability.
Tools, Workflows, and Prompting
- Claude Code’s autonomy (choosing files, running tools) impresses some but burns tokens fast and can wander; others prefer more controllable tools like Aider, Cursor, Cline, Windsurf.
- Effective patterns mentioned:
  - Very clear specs or “rules” files; small, incremental changes; frequent test runs; strict instructions (TDD, one feature at a time, minimal diffs).
  - Git discipline (separate dev/debug branches, frequent commits, rolling back bad agent sessions).
  - Using a “reasoning” model to plan and a cheaper, agentic one to execute.
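The git discipline above can be sketched as follows; it runs in a scratch repo so the sketch is self-contained, and the branch and file names are invented for illustration:

```shell
set -e
# Scratch repo so the sketch is self-contained.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email "demo@example.com" && git config user.name "Demo"
git commit -q --allow-empty -m "baseline"
base=$(git branch --show-current)

# 1. Isolate each agent session on a throwaway branch.
git switch -q -c agent/refactor-session

# 2. Commit after every small, verified change so any step can be undone.
echo "small change" > notes.txt
git add -A && git commit -q -m "agent: small verified change"

# 3a. Roll back just the last bad step...
git reset -q --hard HEAD~1

# 3b. ...or discard the whole session and return to the mainline.
git switch -q "$base"
git branch -q -D agent/refactor-session
```

The point of the throwaway branch is that a runaway agent session costs one `git branch -D`, not an archaeology session through mainline history.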
Cost, Jobs, and “Coder vs Programmer”
- Many note Claude Code is powerful but expensive under pay‑per‑token pricing; Aider with your own API keys is repeatedly mentioned as a cheaper alternative.
- Some interviewers say candidates who use LLMs effectively are dramatically more productive; others argue this mostly measures ability to do standard product work, not deep engineering.
- Debates emerge over “coders vs programmers vs software engineers,” “artisan” vs “fast‑fashion” code, and whether juniors and low‑end dev roles will be squeezed as AI raises baseline productivity.