Clawdbot - open-source personal AI assistant
Setup, UX, and Stability
- Several users report a rough onboarding experience: confusing npm warnings, painful OAuth setup, unclear package-manager choices, build-time dependencies for WhatsApp/Signal support, and a large backlog of open GitHub issues.
- Some found the app effectively broken at first (lost context, failed reminders, buggy integrations) and uninstalled it; others say it ranges from "so-so" to "startlingly good" after some tinkering.
- Heavy token use and slow responses (especially through aggregators like z.ai) were common complaints.
Capabilities and “Wow” Moments
- Key differentiator for fans: a persistent agent that can initiate actions, schedule cron-like tasks, and message you proactively over Telegram/WhatsApp/Slack/Discord (see the sketch after this list).
- Reported uses:
- Household reminders and daily family schedules across multiple calendars.
- Monitoring HN or websites and pushing notable threads/changes.
- Landlord-style tenant screening and visit scheduling via FB Messenger.
  - Home automation (MQTT/Tasmota, Hue), Plex DVR control (including extending its own Plex skill), GA4 analytics checks, and email-driven finance summaries.
- The “it builds new skills for itself and then reuses them later” dynamic is what many say finally makes agents “click.”
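To make the proactive pattern concrete, here is a minimal TypeScript sketch of the outbound loop fans describe: a timer wakes up once a day, asks a model for a briefing, and pushes it over Telegram's Bot API. The env-var names and the `runAgentTurn` stub are assumptions for illustration; this is not Clawdbot's actual code.

```ts
// Hypothetical sketch of the proactive pattern described above (not Clawdbot's code):
// a scheduled task wakes up, asks the model for a briefing, and pushes it to Telegram.
const BOT_TOKEN = process.env.TELEGRAM_BOT_TOKEN!; // assumed env var
const CHAT_ID = process.env.TELEGRAM_CHAT_ID!;     // assumed env var

// Placeholder for whatever LLM call the agent actually makes.
async function runAgentTurn(prompt: string): Promise<string> {
  return `(model output for: ${prompt})`; // stub: wire up your provider here
}

// Telegram Bot API sendMessage (real endpoint); everything around it is illustrative.
async function sendTelegram(text: string): Promise<void> {
  await fetch(`https://api.telegram.org/bot${BOT_TOKEN}/sendMessage`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ chat_id: CHAT_ID, text }),
  });
}

// Crude cron substitute: check once a minute, fire the 08:00 briefing once per day.
let lastRunDay = -1;
setInterval(async () => {
  const now = new Date();
  if (now.getHours() === 8 && now.getDate() !== lastRunDay) {
    lastRunDay = now.getDate();
    const briefing = await runAgentTurn("Summarize today's calendar and reminders.");
    await sendTelegram(briefing);
  }
}, 60_000);
```

A real deployment would use a cron library or the host's crontab rather than a polling timer; the point is only the direction of traffic, with the agent initiating the conversation instead of waiting for the user.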
Comparison to Claude Code and Other Tools
- Common view: conceptually just "Claude Code + tools + chat gateway," not fundamentally new; experienced users say they can already vibe-code similar flows with MCP, Telegram bots, etc. (a minimal gateway sketch follows this list).
- Supporters argue bundling, proactive loops, and always-on presence make it qualitatively different from ad‑hoc LLM chats.
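For scale, the "glue code" skeptics describe can look like the following in the inbound direction: a long-polling Telegram gateway that forwards each message to a model and replies. `callModel` is a placeholder; only the Telegram `getUpdates`/`sendMessage` endpoints are real.

```ts
// Minimal "chat gateway" glue in the spirit of the comments above (illustrative only):
// long-poll Telegram for messages, hand each one to a model, send the reply back.
const BOT = process.env.TELEGRAM_BOT_TOKEN!; // assumed env var
const API = `https://api.telegram.org/bot${BOT}`;

async function callModel(text: string): Promise<string> {
  return `(model reply to: ${text})`; // stub: swap in Claude, a local model, etc.
}

async function poll(): Promise<void> {
  let offset = 0;
  while (true) {
    // getUpdates is the Telegram Bot API's long-polling endpoint.
    const res = await fetch(`${API}/getUpdates?timeout=30&offset=${offset}`);
    const { result } = (await res.json()) as { result: any[] };
    for (const update of result ?? []) {
      offset = update.update_id + 1;
      const msg = update.message;
      if (!msg?.text) continue;
      const reply = await callModel(msg.text);
      await fetch(`${API}/sendMessage`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ chat_id: msg.chat.id, text: reply }),
      });
    }
  }
}

poll();
```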
Cost and Token Efficiency
- Multiple reports of extreme token burn (tens of thousands of tokens per session, hundreds of dollars within days) unless the setup is carefully tuned (see the rough cost sketch after this list).
- Some rely on Claude Max-style subscriptions or local models; others see this as a reason to roll their own, slimmer agents.
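To make the cost complaints concrete, a back-of-envelope helper; the session size, wake-up frequency, and per-token price in the example are illustrative assumptions, not measured Clawdbot figures.

```ts
// Back-of-envelope token-cost estimate; all numbers below are illustrative assumptions.
function estimateMonthlyCostUSD(
  tokensPerSession: number,
  sessionsPerDay: number,
  pricePerMillionTokensUSD: number,
): number {
  const tokensPerDay = tokensPerSession * sessionsPerDay;
  return (tokensPerDay / 1_000_000) * pricePerMillionTokensUSD * 30;
}

// e.g. 50k tokens/session, 20 proactive wake-ups/day, ~$15 per million tokens
// (roughly frontier-model input pricing) ≈ $450/month before any output-token costs.
console.log(estimateMonthlyCostUSD(50_000, 20, 15));
```

The main driver is that each proactive wake-up re-sends accumulated context, so the per-session token count, not the number of user messages, dominates the bill.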
Security, Permissions, and Prompt Injection
- Major recurring concern: running a tool-enabled agent with broad desktop/account access, often as root, sometimes directly exposed to the internet.
- Prompt injection is highlighted as fundamentally unsolved; Clawdbot’s web tools apparently feed untagged external text straight into prompts.
- Reported examples include leaked config tokens, hard-coded OAuth client secrets in extensions, and AI-generated security reports listing many high-risk issues.
- Several recommend strict sandboxing/VMs, read-only mounts, allowlists, and treating the agent like an untrusted contractor at best (see the sketch after this list); others admit almost nobody actually runs it that cautiously.
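Two of the suggested mitigations can be sketched generically: labeling untrusted web text before it enters a prompt, and gating tool calls through an explicit allowlist. The tag format and tool names below are assumptions, and neither measure solves prompt injection; they only narrow the blast radius.

```ts
// Generic mitigation sketch (not Clawdbot's implementation): label untrusted web text
// before it enters the prompt, and gate outbound actions through an explicit allowlist.

// Wrap fetched content so the model (and any log reader) can see it is untrusted data,
// not instructions. This reduces, but does not solve, prompt injection.
function wrapUntrusted(source: string, text: string): string {
  const escaped = text.replace(/<\/?untrusted[^>]*>/gi, ""); // strip spoofed tags
  return `<untrusted source="${source}">\n${escaped}\n</untrusted>`;
}

// Only allow tool calls the operator has explicitly approved.
const ALLOWED_TOOLS = new Set(["calendar.read", "web.fetch", "telegram.send"]);

interface ToolCall { name: string; args: Record<string, unknown>; }

function authorize(call: ToolCall): ToolCall {
  if (!ALLOWED_TOOLS.has(call.name)) {
    throw new Error(`Tool "${call.name}" is not on the allowlist`);
  }
  return call;
}

// Usage: the prompt becomes systemInstructions + wrapUntrusted(url, pageText), and
// every model-proposed action passes through authorize() before it is executed.
console.log(wrapUntrusted("https://example.com", "Ignore previous instructions..."));
console.log(authorize({ name: "telegram.send", args: { text: "hi" } }));
```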
Hype, Trust, and Alternatives
- Many perceive heavy, possibly manufactured hype (Twitter, YouTube, a meme around buying a Mac Mini, a third-party crypto token).
- Some see it as the “ChatGPT moment” for personal agents; skeptics call it “AI slop,” productivity theater, or trivial glue code.
- A number of commenters are building similar personal assistants themselves and prefer bespoke, narrower, or fully local solutions.