Claude Memory

Scope and relation to Claude Code

  • Confusion over whether Claude Code already “has memory”: some see CLAUDE.md and skills as a memory system; others argue real memory means automatic, selective remembering/forgetting across chats.
  • The new feature is seen as analogous to ChatGPT’s account-level memory and to “workspaces”/“projects” that carry their own persistent pre-prompts.
  • Projects are described as having separate memory spaces, which users hope will prevent cross‑contamination between general chats and project work.

Perceived benefits

  • Users like not having to re‑state environment, preferences, or personal context (OS, tools, frameworks, car model, city, skill level, etc.).
  • Memory can make troubleshooting, learning new tech, and ongoing coding projects smoother; project‑wide instructions reduce repetitive prompting.
  • Some value the “ongoing relationship” feel and style mirroring (tone, slang).

Skepticism and drawbacks

  • Many power users prefer “functional” stateless chats: hidden, auto-injected context makes behavior harder to reason about and debug.
  • Reports that memory-like features elsewhere led to noisy, irrelevant, or stale facts, hallucinated memories, and reduced creativity due to “context rot.”
  • Concern that models get stuck in ruts: once an early wrong path or partial plan is in context, iterative edits often perform worse than starting a fresh chat.
  • Several note recent regressions: more tool-writing instead of direct answers, broken Claude Code behavior, and over-eager skills usage.

Privacy and data control

  • Strong push for memory to live locally; server‑side memories are compared to cloud game saves with far higher sensitivity.
  • Worries about legal exposure (“search warrants love this”), corporate data, and LLMs as de facto journals/therapists.
  • Anthropic’s docs (as quoted) promise project‑scoped memories, incognito/temporary chats, and user-visible/editable memory summaries, but some remain wary and want clearer, simpler controls and a true “anonymous mode.”

Safety, “AI psychosis,” and anthropomorphism

  • Some link memory to ChatGPT “psychosis”/sycophancy: reinforcement of bad patterns and a false sense of a persistent persona.
  • Others fear Anthropic’s heavy anti‑sycophancy training plus memory could amplify adversarial, paranoid behavior.
  • Debate over anthropomorphic language (“thinks”, “deceives”): some see it as harmful confusion; others as practical shorthand so long as you don’t assign personhood.
  • One quoted piece of system text requires Claude to explicitly tell lonely users it can’t be their primary support system; opinions split between appreciating the guardrails and seeing “safety” as marketing, or as incomplete without published evals.

Prompting and context strategies

  • Many share workflows:
    • One‑shot, high‑precision prompts; if wrong, edit and resend rather than chat back‑and‑forth.
    • Use temp/incognito chats to avoid memory contamination.
    • Use CLAUDE.md / instruction files and short, focused project prompts; keep them neither too long nor too vague (see the sketch after this list).
    • Periodically start new chats to reset accumulated “garbage context.”
  • Dispute over “forget everything so far”: technically the earlier tokens remain in the context window, but some users find such instructions empirically help steer attention away from prior content.
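
As a concrete instance of the instruction‑file approach, here is a sketch of what a short, focused CLAUDE.md might contain. The file name and project‑root placement follow Claude Code’s convention; the contents are hypothetical examples, not a recommended template.

```markdown
# Project instructions (hypothetical CLAUDE.md contents)

## Environment
- macOS, Node 20, pnpm; do not suggest npm or yarn commands.

## Conventions
- TypeScript strict mode; prefer small, pure functions.
- Run `pnpm test` before proposing any commit.

## Answer style
- Answer directly; do not write new tools or scripts unless asked.
```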

Implementation questions and meta‑discussion

  • Some ask how this differs from RAG and what context/token budget it consumes; others note it’s “just more context engineering” and not fundamentally new.
  • Concerns about feature fatigue and constant tweaks (skills, memory, tools) making models feel less predictable.
  • A few note that first‑party memory layers and open, model‑agnostic context managers (MCP tools, external DBs) are competing approaches; many are already rolling their own, along the lines of the sketch below.
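
On that last point, a roll‑your‑own memory layer can be quite small. The following Python sketch is illustrative only: the MemoryStore class, its word‑overlap scoring, and the “Known user context” preamble are invented for this example (a real system would likely use embeddings for retrieval), but it shows the basic shape people describe: store facts locally, recall the relevant ones per prompt, and inject them as visible, editable context rather than hidden server‑side state.

```python
import sqlite3
import time


class MemoryStore:
    """Local, inspectable memory layer: store short facts, recall the
    ones relevant to the current prompt, inject them as visible text."""

    def __init__(self, path="memory.db"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS memories ("
            "id INTEGER PRIMARY KEY, scope TEXT, fact TEXT, created REAL)"
        )

    def remember(self, fact, scope="default"):
        # Per-scope storage mirrors the "separate memory spaces per
        # project" idea discussed above.
        self.db.execute(
            "INSERT INTO memories (scope, fact, created) VALUES (?, ?, ?)",
            (scope, fact, time.time()),
        )
        self.db.commit()

    def recall(self, prompt, scope="default", limit=5):
        # Crude lexical-overlap relevance; a real system would likely
        # score with embeddings instead.
        words = set(prompt.lower().split())
        rows = self.db.execute(
            "SELECT fact FROM memories WHERE scope = ?", (scope,)
        ).fetchall()
        scored = [(len(words & set(f.lower().split())), f) for (f,) in rows]
        scored.sort(reverse=True)
        return [f for score, f in scored[:limit] if score > 0]

    def preamble(self, prompt, scope="default"):
        # Everything injected is returned as plain text the user can
        # read and edit before it ever reaches the model.
        facts = self.recall(prompt, scope)
        if not facts:
            return ""
        return "Known user context:\n" + "\n".join(f"- {f}" for f in facts)


if __name__ == "__main__":
    store = MemoryStore(":memory:")  # in-memory DB just for the demo
    store.remember("User is on macOS and prefers TypeScript", scope="projA")
    store.remember("User drives a 2014 Outback", scope="general")
    # Only the project-scoped, lexically relevant fact is injected:
    print(store.preamble("Set up a TypeScript build", scope="projA"))
```

Because the store is a local file and the preamble is assembled before anything is sent, the user can inspect and edit exactly what gets injected, which speaks to both the privacy concerns and the “hidden, auto‑injected context” complaint above.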