Claude Code on the web

New capabilities & UX impressions

  • Many see Claude Code on the Web as a polished UI over the CLI (effectively “claude --dangerously-skip-permissions”), with seamless handoff via “claude --teleport session_...” into a local branch.
  • Web + iOS support is appreciated, especially for quickly kicking off tasks or checking on long-running sessions from a phone. Some early users report bugs and hangs (e.g., yarn install), and odd auto-generated session titles.
  • Features people like from CLI (plan mode, rollbacks, approvals, agents, skills) are seen as core to the value; several want these fully preserved in the web flow and better integrated with MCP tools.

Sandboxing, security & environments

  • Anthropic’s open‑sourced native sandbox (macOS-focused, no containers) is widely discussed; some praise its power, others worry that the allowlists include domains through which data can still be exfiltrated.
  • Patterns clarified in the thread: macOS sandbox-exec versus the more robust Endpoint Security/Network Extension frameworks; HTTP-proxy allowlists; and the possibility of fully network-less containers.
  • Constraints: ~12 GB RAM but no Docker/Podman, so testcontainers and multi-service setups are often impossible. Users request an easier full-network mode, Nix-style hashed fetches, or pluggable bring-your-own-sandbox backends.
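The HTTP-proxy allowlist pattern mentioned above can be sketched as a simple egress check. This is a minimal illustration, not Anthropic's actual implementation: the hostnames, the `egress_allowed` name, and the subdomain rule are all assumptions for the sake of the example — and it also shows why allowlisting is fragile, since any allowlisted domain that accepts uploads remains an exfiltration path.

```python
# Minimal sketch of an egress allowlist, as an HTTP proxy might apply it.
# Hostnames below are illustrative placeholders, not a real allowlist.
from urllib.parse import urlparse

ALLOWED_HOSTS = {"pypi.org", "files.pythonhosted.org", "registry.npmjs.org"}

def egress_allowed(url: str) -> bool:
    """Return True if the request's host is allowlisted (exact or subdomain match)."""
    host = urlparse(url).hostname or ""
    return any(host == h or host.endswith("." + h) for h in ALLOWED_HOSTS)

print(egress_allowed("https://pypi.org/simple/requests/"))  # True
print(egress_allowed("https://evil.example.com/upload"))    # False
```

Note that the check only constrains *where* traffic goes, not *what* it carries: a POST to an allowlisted package registry can still smuggle data out, which is the worry raised in the thread.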

Mobile, platforms & authentication

  • Strong frustration that iOS keeps shipping first, with Android lagging or absent. Debate centers on U.S. vs global share, Android monetization, technical fragmentation, and Anthropic–Apple ties.
  • Some want plain username/password or passkeys; magic links and email-based MFA are seen as workflow killers in privacy-focused browsers.

Workflow fit: inner loop vs PR agents

  • Split between people excited by “fire-and-forget” agents that open PRs and those who insist AI must live inside the inner dev loop (Cursor/VS Code Remote, SSH) for rapid, local iteration and inspection.
  • Concerns that opaque remote sandboxes, auto-PRs, and noisy Git activity make review harder and encourage under‑reviewed merges.

Comparisons with Codex & other tools

  • Massive subthread compares Claude Code (often Sonnet 4.5) with OpenAI’s Codex (GPT‑5 Codex).
  • Rough consensus in that discussion:
    • Claude Code: best-in-class UX, permission model, planning, “pair programmer” feel, less over‑engineering, better day‑to‑day ops and fast iteration.
    • Codex: stronger for long-horizon, high-stakes, multi-file or architectural changes; more likely to grind through truly hard problems when left alone, but sometimes overcomplicates or “skips steps”.
  • Experiences are sharply split: some say Codex has completely eclipsed Claude and have shifted significant spend to it; others report Codex hallucinating bugs, failing simple tasks, or being unusable in their stack, while Claude remains more dependable. Many run hybrid setups (e.g., Claude as the harness with Codex invoked via tools, or Amp-style combinations of Sonnet + GPT‑5).

Quality drift, limits & trust

  • Several users report that Claude quality and/or usage limits have worsened over time (especially Opus access on Max), suspecting cost optimization; others say they’ve seen no throttling even with heavy Claude Code use.
  • There is visible anxiety about Anthropic’s long-term competitiveness vs OpenAI, and one commenter says Anthropic has “lost my trust” without elaboration.
  • Some accuse pro‑Codex comments of being astroturfing; others push back, noting similar experiences and the difficulty of proving claims without sharing proprietary prompts/tasks.

Other ecosystem & integration gaps

  • Requests: an API-backed web Claude Code, Bedrock support, a GitHub Actions-style interactive UI, GitLab/Azure DevOps support, and finer-grained GitHub permissions (read-only access plus manual pulls instead of full write).
  • Alternatives mentioned include Happy/Happy Coder, Ona, Sculptor, Amp Code, OpenCode, Zed + Codex, and various custom setups.

Impact on developers

  • Mixed emotions: some describe shipping large applications in days and “productivity exploding”; others feel fun and craftsmanship eroding, or worry about job displacement (“maybe 30% of developers”).
  • One camp sees AI as a 2–3x multiplier that should expand backlogs and hiring; another notes that many executives mainly frame it as a cost-reduction lever.