Comet AI browser can get prompt injected from any site, drain your bank account
Security hygiene & user workarounds
- Many commenters say you should already be isolating sensitive activity: separate browser/profile for banking and PII, minimal or no extensions, private mode, or even separate OS user accounts.
- Some prefer doing banking on locked-down mobile OSes (iOS/Android) rather than desktop browsers with extensions.
- Others note friction: banks treating private browsing as suspicious, password managers not easily scoping credentials to specific profiles.
Agentic browsers and the Comet issue
- Core problem: an AI “agentic browser” embedded in your main browser session sees untrusted page content, private state (cookies, emails, bank sessions), and can act externally (send emails, click links, buy things).
- That combination lets any visited page inject prompts that cause the agent to exfiltrate secrets or perform harmful actions, e.g. draining a bank account or leaking emails.
- Several argue this is obviously unsafe, especially given major vendors run their browsing agents in isolated VMs with no cookies.
Prompt injection & fundamental LLM limits
- Multiple commenters liken this to the “SQL injection phase” of LLMs: control language and data are inseparable.
- Because all conversation (system, user, web content, prior outputs) is just one token stream, there’s no robust way to tell “instructions” from “data” once inside the model.
- Proposals like “model alignment,” instruction hierarchies, or multiple LLM layers are seen as at best probabilistic mitigations, not guarantees; attackers choose worst‑case inputs.
Comparisons to earlier tech & incentives
- Debate over whether this is just another iteration of “security comes later” (like early Internet, telephony bugs) or something more negligent given what we now know.
- Some say startups move fast, security slows them down, and there are few consequences for gross negligence, which optimizes for recklessness.
- Others call for treating such software like safety‑critical engineering (bridges, banking systems), with liability and possibly regulation.
Appropriate use & sandboxing
- Many think agentic AI should only be used where actions are easily reversible (e.g. code edits under version control, ideally inside VMs/containers with no real secrets).
- Comments highlight how hard true sandboxing is: even limited command whitelists and build tools can be abused to execute arbitrary code.
- Consensus among skeptics: treat LLMs as completely untrusted input, don’t give them simultaneous access to untrusted content, private data, and external actions.