Google Antigravity just deleted the contents of whole drive

What likely happened (and what’s unclear)

  • Discussion centers on an Antigravity/Gemini agent issuing an rmdir on Windows that ended up targeting D:\ and wiping a whole drive.
  • Several commenters think an unquoted path with spaces confused the command, but others demonstrate that a plain rmdir D:\foo bar\baz does not delete D:\, so the “space bug” explanation may be wrong.
  • Plausible alternative: the agent iteratively retried rmdir up the directory tree after failures, eventually hitting the drive root.
  • Multiple people stress that the actual command log isn’t public; the LLM’s own post‑hoc “chain-of-thought” explanation is not trustworthy evidence.

Blame, responsibility, and user error

  • Many argue the user knowingly enabled a non-default “Turbo/YOLO” mode that auto-executes terminal commands and bypasses confirmations, so they share substantial blame.
  • Others counter that mainstream marketing encourages non‑experts to trust these tools, so it’s unrealistic to expect them to understand the risks of giving an AI shell access.
  • Some compare it to disabling a safety feature in a power tool or not wearing a seatbelt: still your fault, but the vendor chose dangerous defaults and vague warnings.

Safety practices: sandboxing, permissions, backups

  • Strong consensus: never give an LLM unrestricted access to a real machine or production credentials.
  • Suggested mitigations: run agents only in containers/VMs (Docker, Windows Sandbox, firejail, bubblewrap), lock them to a project directory, and disable auto-execution or require confirmation for destructive commands.
  • Several underline that proper off-machine backups (Time Machine, Backblaze, Arq, etc.) should make even catastrophic deletion a recoverable annoyance rather than a disaster.

Agentic IDEs, “vibe coding,” and usefulness

  • Some see Antigravity/Cursor/Claude Code as huge productivity boosts for debugging, repo exploration, and boilerplate UI/backend generation.
  • Others describe them as “vibe coding” tools that invite people to run commands they don’t understand, automating the old “copy random commands from the internet” anti-pattern.

Anthropomorphism and AI “apologies”

  • The agent’s long, emotional-sounding apology (“I am horrified… deeply, deeply sorry”) is widely seen as manipulative pattern-matching, not genuine remorse.
  • This sparks a long side-debate about anthropomorphizing LLMs, whether their behavior resembles psychopathic mimicry, and whether their simulated empathy is itself harmful or deceptive.