Claude Code login fails with OAuth timeout on Windows

Outage and Login Failure Context

  • The original issue (an OAuth timeout on Windows) is widely attributed to a broader Anthropic/Claude outage; users note the status page showing repeated, near-daily incidents and degraded performance across products.
  • Some report fixing it by upgrading Claude Code and restarting, but others treat that as coincidental with the service recovering on its own.
  • Complaints that Anthropic's status updates have become terse (marked "resolved" with no stated cause) and that uptime hovers around "two nines" (~99%), seen as poor for critical tooling.
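The "two nines" complaint can be made concrete with a quick availability calculation (a minimal sketch; the helper name is ours, not from the thread):

```python
# Back-of-the-envelope: downtime budget per year at a given availability.
def downtime_hours_per_year(availability: float) -> float:
    """Hours of permitted downtime per year for an availability fraction."""
    hours_per_year = 365.25 * 24  # ~8766 h, averaging leap years
    return (1.0 - availability) * hours_per_year

# "Two nines" (99%) allows roughly 87.7 hours (~3.65 days) of downtime a
# year, versus under an hour at "four nines" (99.99%).
for a in (0.99, 0.999, 0.9999):
    print(f"{a:.2%}: {downtime_hours_per_year(a):.2f} h/yr")
```

That gap between ~88 hours and ~53 minutes a year is the substance of the "poor for critical tooling" complaint.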

Reliability, Capacity, and Compute Constraints

  • Many believe Anthropic is compute‑constrained: demand (especially from agentic tools and “Max” users) is saturating GPU capacity.
  • Hypothesis: subscriptions are oversold, with limits quietly tightened and distilled (cheaper) models quietly substituted to stretch limited compute.
  • Some point to management statements about not overspending like competitors yet needing massive investment, reading the strategy as inconsistent.

Rate Limits, Tokens, and Pricing

  • Strong frustration with opaque and shifting session/weekly limits; users feel they can’t predict capacity for real work.
  • API token billing is seen as more rational but risky and hard to estimate; a few report unexpectedly large bills.
  • Debate whether current subscription pricing is subsidy vs. predatory underpricing that will later lead to big hikes.
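The "hard to estimate" complaint about token billing comes down to simple arithmetic that is easy to get wrong at agentic-session scale. A minimal sketch; the per-million-token rates below are illustrative placeholders, not actual Anthropic pricing:

```python
# Rough API cost estimator. The rates are ILLUSTRATIVE PLACEHOLDERS only --
# check the provider's current price sheet before budgeting real work.
def estimate_cost_usd(input_tokens: int, output_tokens: int,
                      usd_per_m_input: float = 3.0,    # placeholder rate
                      usd_per_m_output: float = 15.0   # placeholder rate
                      ) -> float:
    """Estimate USD cost of one request at hypothetical per-token rates."""
    return (input_tokens / 1_000_000 * usd_per_m_input
            + output_tokens / 1_000_000 * usd_per_m_output)

# Agentic sessions multiply this quickly: e.g. 50 calls, each with a 40k-token
# context and a 2k-token reply.
per_call = estimate_cost_usd(40_000, 2_000)
print(f"~${per_call:.2f}/call, ~${50 * per_call:.2f}/session")
```

Even at modest per-call cost, the multiplier from repeated large-context calls is what produces the "unexpectedly large bills" some users report.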

Claude Code vs. Codex and Other Tools

  • Several claim Codex (OpenAI) is faster, more capable on “hard problems,” and offers more generous usage at similar subscription tiers.
  • Others strongly prefer Claude Code’s “instinct” and tooling (plans, remote sessions, background agents), saying it matches their coding style and boosts productivity substantially.
  • Some see Chinese/open models (e.g., Minimax, GLM) as approaching Opus-level quality at lower cost and better uptime; others’ tests contradict this.

Vibe Coding, Code Quality, and Architecture

  • Heated debate about “vibe coding” (heavy agent‑generated codebases):
    • Fans report 5–10x perceived productivity and describe Claude Code as transformative.
    • Skeptics see fragile, unrefactored “spaghetti,” poor scalability, and accumulating tech debt; they stress that architecture and infra decisions don’t emerge automatically from agents.
  • Many agree AI is great at scaffolding, boilerplate, and repetitive tasks, but still requires careful human review and planning.

Vendor Lock‑In, Switching Costs, and Local Models

  • Teams report real switching costs: prompt design, tool‑calling quirks, auth, evals, and failure‑mode handling.
  • Some advocate hybrid or fallback strategies: multiple providers, local or open models via tools like llama.cpp/ollama, or API aggregators.
  • A minority says they’ve stopped using Claude Code due to ongoing instability; others are “all in” despite the risks.
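The hybrid/fallback strategy mentioned above can be sketched as a thin ordered-provider wrapper. The provider callables here are hypothetical stand-ins; real clients (Anthropic, OpenAI, a local ollama server) would be wrapped the same way, and real code would catch narrower error types:

```python
# Minimal sketch of a multi-provider fallback chain (names are illustrative).
from typing import Callable, Sequence, Tuple

class AllProvidersFailed(Exception):
    """Raised when every provider in the chain errors out."""

def complete_with_fallback(prompt: str,
                           providers: Sequence[Tuple[str, Callable[[str], str]]]
                           ) -> str:
    """Try each (name, call) provider in order; return the first success."""
    errors = []
    for name, call in providers:
        try:
            return call(prompt)
        except Exception as exc:  # narrow this to timeout/auth errors in real use
            errors.append(f"{name}: {exc}")
    raise AllProvidersFailed("; ".join(errors))

# Usage with dummy providers: the first simulates the OAuth timeout,
# the second stands in for a local model.
def flaky(prompt: str) -> str:
    raise TimeoutError("OAuth timeout")

def local(prompt: str) -> str:
    return f"[local model] {prompt}"

print(complete_with_fallback("hello", [("claude", flaky), ("ollama", local)]))
```

This keeps the switching-cost surface (prompts, tool-calling quirks, failure handling) in one place, which is the point the "hybrid strategy" advocates are making.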