Claude.ai down
Outage characteristics
- Some users report Claude.ai and certain models (e.g., Opus 4.6, especially with 1M context) consistently returning 500 errors, while others see everything working, particularly via the API or with different models (e.g., Sonnet 4.5/4.6).
- Several note the issue appears model-specific and is more visible on claude.ai than via the API/CLI.
- For many the outage was short (5–10 minutes), but there’s frustration that such incidents are now frequent, especially around US West Coast mornings; some call it a “Monday morning ritual.”
- Status-page lag and classification draw criticism: users see clear outages, or a patchwork of incidents, while the page still claims “all systems operational” or misses shorter degradations.
Reliability and “number of nines”
- Multiple comments mock Anthropic’s uptime as “a single 9” (i.e., roughly 90 percent) instead of typical multi‑nine targets.
- Some argue that building businesses atop LLM APIs with 0–2 “nines” is risky and operationally painful.
- Others say this is what product–market fit looks like: users keep paying and relying on the tool despite downtime.
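The “nines” jargon in these comments maps directly onto allowed downtime. A quick sketch of the arithmetic (the 30-day month is an assumption for round numbers):

```python
# Allowed downtime for a given number of "nines" of availability.
# Assumes a 30-day month; figures are illustrative.
def allowed_downtime_minutes(nines: int, period_minutes: float = 30 * 24 * 60) -> float:
    # N nines means availability of 1 - 10**-N, so the downtime fraction is 10**-N.
    return period_minutes * 10 ** (-nines)

# A single nine (~90%) permits three full days of downtime per month;
# three nines (99.9%) permit about 43 minutes.
print(allowed_downtime_minutes(1))  # 4320.0 minutes (72 hours)
print(allowed_downtime_minutes(3))  # 43.2 minutes
```

This is why the jump from “one nine” to industry-standard targets is so large: each added nine cuts the downtime budget by a factor of ten.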
Status pages, SaaS, and infrastructure
- Discussion of why status pages are often outsourced: they must stay up under spike traffic when the main service is down.
- Some say status-page SaaS is one of the few SaaS categories that clearly makes sense; others argue hosting a static, externally served page is simple enough.
- This is used as a counterpoint to claims that “SaaS is dead due to AI”: functionality is easy; reliability, ops, and edge cases are what you pay for.
Dependence on AI tools
- Many admit outages briefly halt or slow their work; some simply switch to other providers (Codex, GPT-based tools, other LLMs) or do tasks manually.
- There’s concern that developers and teams may be “deskilling,” becoming unable to function efficiently without AI assistants.
- Others insist AI is just a productivity tool and that they can still code manually, though expectations have shifted (one commenter says they now do “an entire team’s” work).
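The switch-on-failure behavior several commenters describe doing by hand can be automated with a simple fallback wrapper. A hedged sketch with placeholder callables standing in for real provider clients (this is not any vendor’s SDK):

```python
from typing import Callable, Sequence

class AllProvidersFailed(Exception):
    """Raised when every configured provider errors out."""

def complete_with_fallback(prompt: str,
                           providers: Sequence[Callable[[str], str]]) -> str:
    # Try each provider in priority order; fall through to the next on any error.
    errors: list[Exception] = []
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as exc:  # real code would catch provider-specific errors
            errors.append(exc)
    raise AllProvidersFailed(errors)

# Placeholder providers simulating the scenario in the thread:
def flaky_primary(prompt: str) -> str:
    raise RuntimeError("500 Internal Server Error")  # simulated outage

def backup(prompt: str) -> str:
    return f"answer to: {prompt}"

print(complete_with_fallback("hello", [flaky_primary, backup]))
```

In practice the providers differ in prompt dialect and output quality, which is part of why commenters report fallback being disruptive even when it works.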
Business risk and platform dependence
- Several compare this to 2010s “programmable web” and cloud lessons: building businesses tightly coupled to third‑party APIs (Twitter, Facebook, etc.) often ended badly when terms or reliability changed.
- Some argue dependence can still be rational if cost savings (e.g., firing half the staff, automating with AI) outweigh downtime losses.
- Others highlight competitive risk: being down when a shared provider fails can push customers to competitors that run differently.
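The “savings outweigh downtime losses” argument is ultimately just arithmetic. A toy break-even model; all figures are made-up placeholders, not claims about any company:

```python
# Back-of-envelope: do automation savings beat expected outage cost?
# All numbers passed in below are illustrative placeholders.
def net_monthly_benefit(monthly_savings: float,
                        outage_hours: float,
                        loss_per_outage_hour: float) -> float:
    return monthly_savings - outage_hours * loss_per_outage_hour

# E.g. $50k/month saved, 10 outage hours at $2k/hour lost: still ahead by $30k.
print(net_monthly_benefit(50_000, 10, 2_000))
```

The competitive-risk point in the thread is the part this model misses: the loss per outage hour is not constant if customers defect to competitors during correlated outages.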
Open models and self‑hosting
- A few report success moving off cloud AI to self‑hosted open models on rented GPUs, claiming faster, cheaper, and more controllable workflows.
- Others push back that open models still lag frontier models so much that they can be slower in practice due to lower quality and more iterations.
- There is cautious optimism that improving open‑weight models could mitigate dependency on a small set of frontier‑model providers, though operational downtime will remain challenging at frontier scales.
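Part of what makes moving between cloud and self-hosted models cheap is that common self-hosting servers (e.g., vLLM, Ollama) expose an OpenAI-compatible HTTP API, so only the base URL and model name change. A sketch of the request body for such an endpoint; the model name and URL path shown are placeholders, and availability of specific fields varies by server:

```python
import json

# Build the JSON body for a POST to <base_url>/v1/chat/completions on a
# self-hosted, OpenAI-compatible server. "local-llama" is a placeholder name.
def chat_request(model: str, prompt: str) -> dict:
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

body = json.dumps(chat_request("local-llama", "Review this function."))
print(body)
```

Swapping providers then reduces to pointing the same client at a different host, which is what makes the self-hosting migration stories in the thread plausible at the workflow level, even if model quality differs.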
User sentiment toward Anthropic/Claude and Mythos
- Mix of affection and frustration: people like Claude’s capabilities and context window but are annoyed by repeated outages and perceived poor uptime for a “flagship AI company.”
- Some praise Anthropic’s relative honesty in reporting issues; others call them one of the least transparent providers, claiming many degradations never show up on the status page.
- Mythos, Anthropic’s internal tool, is a frequent target of jokes: suggestions that it “went too hard” and broke things, or quips that if it can “fix all bugs,” what’s left to bring the service down?