Top downloaded skill in ClawHub contains malware
ClawHub / OpenClaw skill security model
- Commenters note that ClawHub explicitly did not review user-submitted skills and told users to “use their brain,” while simultaneously encouraging workflows where agents get broad or root access and can auto-download skills.
- Skills are not just prompts; they can include Python, Bash, and arbitrary executables. This makes the ecosystem equivalent to “curl | sudo bash,” but on autopilot and triggered repeatedly by arbitrary input (emails, web pages, tickets).
- The UI/marketplace design (download counts, no warning/sharing of risk) is seen as ideal for attackers: easy to game popularity, hard to propagate security warnings, users have secrets and important data on machines.
Specific malware case
- The top-downloaded “Twitter” skill instructed users to download an “openclaw-core” component from external links, including a password-protected archive and a rentry page that produced a curl | bash chain.
- That script fetched a binary later identified by some AV engines as a macOS credential stealer; others argue 8/64 VirusTotal detections alone aren’t definitive but accept the full chain clearly looks malicious.
- Several expect this to be a broader campaign pattern: packaging stealer malware as “prerequisites” in skills.
What to do about agent/skill security
- Many say the insecurity is obvious: giving an LLM agent broad system access and letting it run arbitrary downloaded code is fundamentally unsafe.
- Others argue the industry still doesn’t know how to make powerful agents safe: sandboxing and permissions help, but are at odds with “do anything I would do” use cases and are vulnerable to prompt injection.
- One tool (“skill-snitch”) is described that statically and dynamically analyzes skills using grep-like pattern matching plus LLM review, emphasizing that grep can’t be prompt-injected but obfuscation (e.g., base64) still evades simple checks.
Operating systems and isolation
- Some ask why mainstream OSes even allow a single process to read so much unrelated app data; they argue for secure-by-default, Plan 9–like isolation so agents can’t trivially exfiltrate everything.
- Replies stress long-standing tradeoffs: desktop users expect easy file sharing across apps; strong isolation on phones already makes them worse development/automation platforms; users and developers routinely override protections.
Reaction to the 1Password article and AI writing
- A large subthread criticizes the blog post’s AI-generated “LinkedIn/B2B” style as distracting and trust-eroding, even when the underlying research is solid.
- Others find the style acceptable or indistinguishable from typical corporate blogs and argue people overestimate their ability to detect AI text.
- Some urge authors to disclose AI assistance and emphasize that readers value a distinct human voice more than extra volume or speed.