Pairing with Claude Code to rebuild my startup's website

LLM coding workflows & context management

  • Several commenters advocate aggressively trimming/clearing context between phases (research → plan → implementation → review), often storing intermediate state in research.md, project.md, plan.md, etc., then reloading as needed.
  • Others report success with very long-running chats, relying on auto-summarization/compression and only restarting when performance degrades.
  • Some find multi-agent “role” setups (researcher, architect, planner, implementer, reviewer) and folder‑scoped terminals effective; others say this is overkill and feels like managing a dev team rather than writing code.
  • There is disagreement over the value of “you are an expert engineer”–style role prompts: some say they still help, others say modern models already behave that way and such prompts are redundant.
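The phase-based workflow in the first bullet can be sketched as a small driver that persists each phase's output to disk and seeds the next phase's fresh context with only those files, instead of carrying the whole conversation forward. This is a minimal sketch of the idea, not any particular tool's API: `run_agent` is a hypothetical stand-in for whatever agent CLI or API you invoke, and the phase/file names follow the ones mentioned in the thread.

```python
from pathlib import Path

def run_agent(prompt: str) -> str:
    # Hypothetical stand-in: each call represents an agent run with a
    # *fresh* context. In practice this would shell out to your agent.
    return f"[agent output for: {prompt[:40]}...]"

PHASES = [
    # (phase name, output file, state files from earlier phases to reload)
    ("research", "research.md", []),
    ("plan", "plan.md", ["research.md"]),
    ("implement", "implementation-notes.md", ["plan.md"]),
    ("review", "review.md", ["plan.md", "implementation-notes.md"]),
]

def run_pipeline(task: str, workdir: Path) -> None:
    for phase, outfile, inputs in PHASES:
        # Seed the fresh context with only the files this phase needs,
        # rather than the full transcript of every earlier phase.
        context = "\n\n".join(
            (workdir / f).read_text() for f in inputs if (workdir / f).exists()
        )
        prompt = f"Phase: {phase}\nTask: {task}\n\n{context}"
        (workdir / outfile).write_text(run_agent(prompt))
```

Calling `run_pipeline("rebuild the marketing site", Path("."))` would leave `research.md`, `plan.md`, etc. on disk, which is what lets commenters restart a degraded session and reload only the distilled state.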

Productivity vs micromanagement

  • Critics argue that heavy prompting, planning, and context choreography looks like more work than directly editing code.
  • Proponents counter that on sufficiently large codebases with good test coverage, LLMs act like constraint solvers guided by the test suite and can cut work from “8 hours to 2,” especially when multiple agents run in parallel.
  • One detailed case study describes rebuilding a large WordPress site faster and better than the original team using agents, claiming a clear productivity win.
  • Others note that agent workflows still suffer from “context rot,” messy CSS/layout, and poor separation of concerns, creating long‑term maintenance headaches.

Trust, safety, and codebase access

  • Several people prefer keeping LLMs away from full repos, instead pasting/selecting only relevant files to avoid hidden, hard‑to‑diagnose bugs.
  • Others accept repo‑wide access but emphasize constant sanity‑checking, staged commits, and good version control as a safety net.
  • There’s concern that LLMs confidently declare code “production ready” while quietly drifting off-task once earlier context has been compressed away.

Tooling and model comparisons

  • Claude Code is praised for polish, planning mode, context compression, and resilience to API rate limits; some users even route it to non‑Anthropic models (e.g., Cerebras/Qwen) for speed.
  • Codex (OpenAI’s agent) is described by one user as dramatically more effective and less verbose than Claude for web app work.
  • Cursor, Zed, aider, Cline, Opencode, and others are mentioned; experiences vary widely by workflow and expectations.

Landing page UX and product positioning

  • Multiple comments critique the startup’s landing page: hidden scroll affordance, mobile layout quirks, and an “AI‑ish” emoji-heavy section.
  • Some argue the marketing is overdone for a technical audience and lacks concrete explanations, demos, and methodology for the simulation product.
  • There’s skepticism that technically capable teams will adopt a SaaS simulation tool rather than build their own, but also recognition that robust simulations are hard and may justify specialized products.

Broader attitudes toward AI tools

  • One thread frames LLM use as a “pay‑to‑play management sim,” likening token pricing to arcade tokens and electricity; others push back or lean into the “agent management” metaphor.
  • Several participants stress “proceed with caution”: AI can accelerate work but still needs strong human oversight, especially on production code.
  • Debate emerges over whether time spent learning prompt/agent tricks is an investment in future productivity or largely ephemeral “LLM whisperer” lore that will be obsolete as tools mature.
  • Some worry about over‑reliance on AI versus developing one’s own planning and reasoning; others are comfortable treating LLMs as everyday tools despite their flaws.