Show HN: Stop Claude Code from forgetting everything
How the tool works & intended use cases
- The skill connects Claude Code to an external MCP server that stores past conversations in a small database (key/value plus embeddings), organized by namespaces and “hypergraph” relationships.
- On each request it:
  - Embeds the current query.
  - Runs semantic + time-weighted search over prior sessions.
  - Injects only the top-N relevant snippets into the prompt as additional context.
- Used mainly to:
  - Resume long research/coding sessions across days.
  - Ask “what was I trying to do here?”, “what research threads already exist?”, “where did reasoning drift?”.
  - Let Claude reflect on and critique its own past reasoning.
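The retrieval step above can be sketched as cosine similarity with an exponential recency decay. Everything below (the toy bag-of-words `embed`, the half-life parameter, the snippet schema) is an illustrative assumption, not the tool’s actual code; a real implementation would call an embedding model and a vector index:

```python
import math
import time

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def embed(text):
    """Toy bag-of-words embedding over a tiny fixed vocabulary.
    Stands in for a real embedding model."""
    vocab = ["memory", "search", "context", "agent", "compaction"]
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def recall(query, snippets, top_n=3, half_life_days=7.0):
    """Rank stored snippets by semantic similarity, decayed by age
    (the 'time-weighted' part), and return only the top-N hits."""
    now = time.time()
    q = embed(query)
    scored = []
    for s in snippets:  # each snippet: {"text": ..., "ts": unix_seconds}
        age_days = (now - s["ts"]) / 86400
        decay = 0.5 ** (age_days / half_life_days)  # halves every half_life_days
        scored.append((cosine(q, embed(s["text"])) * decay, s))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [s for score, s in scored[:top_n] if score > 0]
```

Only the surviving top-N snippets would be spliced into the prompt, which is why the time weighting matters: without the decay, an old but superficially similar session would crowd out last night’s work.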
Comparison to Claude’s built‑ins (CLAUDE.md, agents, skills, compaction)
- Several commenters say a good CLAUDE.md, AGENTS.md, per-project docs, and checkpoints/restore are enough; they see this as duplicating what agents + skills already solve.
- Others report:
  - Compaction making the model feel “dumber” and losing important edge cases.
  - CLAUDE.md often being ignored or only weakly applied.
- One thread explains a hierarchy:
  - CLAUDE.md → broad global/project instructions.
  - Agents → narrower, language/domain-specific instructions.
  - Skills → single-purpose instructions plus deterministic tools (ripgrep, dependency-graph analyzers, image generators), to keep context tight.
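The last point — a skill as a thin wrapper around a deterministic tool — can be sketched as a search helper that returns only compact `file:line:text` hits instead of whole files. This is an illustrative Python sketch, not any commenter’s actual setup; a real skill would typically shell out to ripgrep rather than scan files itself:

```python
from pathlib import Path

def compact_search(root, pattern, max_hits=20):
    """Deterministic search helper in the spirit of a ripgrep-backed
    skill: return at most max_hits 'file:line:text' strings, never
    whole files, so the model's context stays small."""
    hits = []
    for path in sorted(Path(root).rglob("*.py")):
        for lineno, line in enumerate(
            path.read_text(errors="ignore").splitlines(), start=1
        ):
            if pattern in line:
                hits.append(f"{path}:{lineno}:{line.strip()}")
                if len(hits) >= max_hits:
                    return hits
    return hits
```

The design point is the cap and the terse output format: the tool does the exhaustive work deterministically, and the model only ever sees a bounded, grep-style summary.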
Privacy, hosting, and vendor lock‑in
- Multiple commenters say sending proprietary or sensitive code to a third‑party alpha service is a non‑starter; they want purely local or self‑hosted storage.
- Concerns include compliance, data leakage, vendor disappearance/price hikes, and negotiating agreements for “every small AI tool”.
- Some argue that even if useful, such features will eventually be best implemented by the model vendors themselves.
Alternatives and lightweight strategies
- Many describe simpler approaches:
  - Repo- or user-level CLAUDE.md and AGENTS.md files.
  - Markdown “plans”, tickets, implementation logs, and work summaries committed to git.
  - Session-JSONL parsing and local search (ripgrep, Tantivy, jq, custom CLIs).
  - Other memory tools: beads, claude-mem, Double, rg_history, memory-lane, custom MCP memory servers.
- Some find using less context, frequent fresh sessions, and strong planning/linting/tests more effective than elaborate memory layers.
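The session-JSONL approach mentioned above can be sketched in a few lines. The flat `{"role", "content"}` schema here is an assumption for illustration — real Claude Code session files nest message content more deeply, which is why commenters reach for jq or custom CLIs:

```python
import json

def grep_sessions(jsonl_text, needle, role="user"):
    """Scan a session transcript stored as JSON Lines and return the
    messages from `role` whose content contains `needle`."""
    out = []
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue
        try:
            msg = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip malformed or partial lines
        if msg.get("role") == role and needle in str(msg.get("content", "")):
            out.append(msg["content"])
    return out
```

Pointed at a directory of session files, a loop over this function gives a crude but fully local answer to “what was I trying to do here?” with no external service involved.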
Skepticism about memory abstractions & impact
- Repeated sentiment: there are already “countless” memory/context tools; few show benchmarks or clear productivity gains over simple docs.
- Doubts that external memory can reliably handle:
  - Drift, stale state, or subtle errors accumulating over time.
  - Multi-agent coordination without adding new failure modes.
- The project’s authors emphasize portability and shared state across tools/agents rather than “infinite context.” Some commenters remain unconvinced that semantic/temporal search alone solves the coordination problems they describe.