Claude Code Routines
Feature & Use Cases
- Routines let Claude Code run tasks on a schedule, via callbacks, or on GitHub events.
- Users report successful workflows: PR review, Slack/email/GitHub digests, feedback triage, simple automation around repos.
- Some see it as Anthropic absorbing “OpenClaw-style” cron + hooks; others liken it to n8n / GitHub Actions but LLM-driven.
- Several say it’s easy to replicate with cron + scripts, so the feature is more about convenience and hosting than raw capability.
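The “cron + scripts” replication several commenters describe boils down to scheduling a non-interactive CLI call. A minimal sketch follows; the digest prompt is a made-up example, and the claude -p invocation is only built and scheduled here, not actually executed:

```python
import datetime
import subprocess

# Hypothetical prompt for a daily repo digest (illustrative only).
DIGEST_PROMPT = "Summarize yesterday's merged PRs in bullet points."

def build_digest_command(prompt: str) -> list[str]:
    """Build the non-interactive invocation: `claude -p` runs a single
    prompt and prints the result to stdout."""
    return ["claude", "-p", prompt]

def seconds_until(hour: int, now: datetime.datetime) -> float:
    """Seconds from `now` until the next occurrence of `hour`:00 --
    the delay a cron entry like `0 7 * * *` would encode."""
    target = now.replace(hour=hour, minute=0, second=0, microsecond=0)
    if target <= now:
        target += datetime.timedelta(days=1)
    return (target - now).total_seconds()

def run_digest() -> str:
    """Shell out to the CLI and capture the digest text."""
    result = subprocess.run(
        build_digest_command(DIGEST_PROMPT),
        capture_output=True, text=True, check=True,
    )
    return result.stdout
```

In practice a crontab line invoking such a script (and piping the output to Slack or email) covers the digest use cases people list; Routines mainly removes the need to host the machine running cron.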
Usage Limits, Pricing & ToS Ambiguity
- Strong confusion around what’s allowed on the fixed-price subscription:
  - Is claude -p allowed in scripts, bots, IDEs, or only direct human use?
  - When does a personal script become a “third‑party harness”?
- Reports of accounts banned for scripted CLI use, with little recourse.
- Routines on Max include a small number of “free” runs per day, after which runs are billed per-token; some see this as constraining and opaque.
- Many perceive shifting limits mid‑subscription as bait‑and‑switch; others see it as a compute‑capacity reaction.
Model Quality, Context & “Nerfing”
- Multiple users feel Claude (especially Opus) has become less reliable, more verbose, and more error‑prone in coding tasks.
- Others still find it excellent, suggesting possible A/B tests, routing differences, or expectation drift.
- The 1M-token context is widely blamed for token bloat, higher costs, and quality regressions; people manually cap context to ~200k.
- There’s debate whether 1M vs 200k context variants differ in quality below 200k tokens; outcome is unclear.
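The manual context cap mentioned above amounts to dropping the oldest turns once an estimated token count exceeds a budget. A sketch, assuming a naive ~4-characters-per-token heuristic (not the tokenizer the API actually uses) and plain-string messages:

```python
# Rough heuristic: ~4 characters per token. This is an assumption for
# illustration, not the real tokenizer.
CHARS_PER_TOKEN = 4

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // CHARS_PER_TOKEN)

def cap_context(messages: list[str], budget_tokens: int = 200_000) -> list[str]:
    """Keep the most recent messages whose estimated total stays within
    `budget_tokens`, dropping the oldest first."""
    kept: list[str] = []
    total = 0
    # Walk newest-to-oldest so recent turns survive the cut.
    for msg in reversed(messages):
        cost = estimate_tokens(msg)
        if total + cost > budget_tokens:
            break
        kept.append(msg)
        total += cost
    return list(reversed(kept))
```

Capping like this trades long-range recall for lower cost and, per the complaints above, often steadier output quality.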
Lock‑In, Platform Strategy & Trust
- Many view Routines as another step toward vendor lock‑in and “AI cloud” platform economics, not just model access.
- Strong reluctance to depend on opaque, changing features (Routines, Skills, Cowork) that could be nerfed, sunset, or repriced.
- Comparisons to cloud lock‑in (AWS Lambda et al.); several prefer keeping orchestration under their own control.
Alternatives, Reliability & Broader Sentiment
- Frequent mentions of alternatives (OpenClaw, GitHub Agentic Workflows, Codex, local/open models, custom orchestrators).
- Complaints that Anthropic ships overlapping, sometimes buggy features while core issues (context bloat, CLI regressions, flaky scheduling) persist.
- Growing frustration with rapid “feature velocity,” unclear policies, and perceived enshittification, even from long‑time fans.