Google Antigravity

What Antigravity Actually Is

  • Widely recognized as a minimally customized fork of VS Code / Electron, with an “agents” pane and Gemini integration layered on.
  • Website and blog largely avoid saying “VS Code”; some see that as disrespectful to the upstream work.
  • Supports multiple models (Gemini 3 Pro high/low, Claude Sonnet 4.5, GPT-OSS 120B), not just Gemini.

VS Code Fork Explosion

  • Many see this as “yet another AI IDE that’s just VS Code,” alongside Cursor, Windsurf, Lovable, etc.
  • Debate over why these aren’t just extensions:
    • One side: Microsoft gatekeeps deeper APIs for Copilot; forks allow tighter integration and independence from Microsoft's control.
    • Other side: fragmentation is needless; a common “AI-enabled” fork or open interfaces would be better.
  • Some praise editors built from scratch, like Zed or the JetBrains IDEs, as higher-quality alternatives.

Launch Quality & UX Issues

  • Numerous reports of:
    • Blank pages or MIME-type errors in Firefox; broken scrolling, especially on mobile, that feels “nauseating.”
    • Startup failures, crashes, and extreme slowness on macOS and Linux; fans spinning hard.
    • “Setting up your account” spinner that never completes, especially for Workspace accounts.
  • Website criticized for:
    • Almost no product screenshots at first, heavy marketing language, and odd scroll hijacking.

Trust, Longevity & Lock‑in

  • Strong skepticism about investing in a Google IDE due to the company’s history of killing products and internal incentives favoring launches over maintenance.
  • Concerns about:
    • Account bans locking users out of tools.
    • Data collection/telemetry and training on user code (especially for free tiers).
    • No Vertex / enterprise integration yet; Workspace accounts initially unsupported.
  • Some expect Antigravity to be short-lived or primarily a promotion vehicle.

“Agentic Development” Reactions

  • Marketing pitch: developers become “managers of agents,” focusing on architecture and tasks, not implementation.
  • Many engineers find this framing unappealing or dystopian, likening it to low/no‑code hype:
    • Real bottleneck is specifying requirements and handling edge cases, not just cranking out code.
    • Fear of future systems where nobody understands the codebase, cruft explodes, and agents continually patch over issues.
  • Others argue agents can:
    • Summarize architectures, explain code, and accelerate onboarding.
    • Automate GUI testing via browser control, a genuine pain point.
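
  A minimal sketch of the kind of browser-driven GUI check such an agent could run, assuming Playwright as the automation layer; the URL and selectors below are hypothetical placeholders, not anything Antigravity ships:

  ```typescript
  // Hypothetical smoke test of a login flow, driven through a real browser.
  // Playwright is an assumed tool choice; Antigravity's own browser control is not documented here.
  import { chromium } from "playwright";

  async function checkLoginFlow(): Promise<void> {
    const browser = await chromium.launch({ headless: true });
    const page = await browser.newPage();
    try {
      await page.goto("https://example.com/login");   // hypothetical app URL
      await page.fill("#email", "test@example.com");  // hypothetical selectors
      await page.fill("#password", "correct-horse");
      await page.click("button[type=submit]");
      await page.waitForURL("**/dashboard");          // fails if login never reaches the dashboard
      console.log("login flow OK");
    } finally {
      await browser.close();
    }
  }

  checkLoginFlow().catch((err) => {
    console.error("GUI check failed:", err);
    process.exit(1);
  });
  ```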

Pricing, Quotas & Access

  • The free preview’s “generous” limits felt extremely tight:
    • Users hit “model quota exceeded” or “provider overload” after minutes or a couple of prompts, often on their first real task.
    • Confusing error messages (quota vs. global overload) and no clear way to pay for higher limits or bring your own API keys.
  • This undermines confidence and makes it hard to evaluate Gemini 3 Pro inside the IDE.

Comparisons to Existing Tools

  • Frequent comparisons to:
    • Cursor / Codex / Claude Code / Opencode, where many already have stable workflows.
    • Firebase Studio, IDX, Jules, Gemini CLI—other overlapping Google efforts.
  • Some feel Antigravity adds a useful centralized Agent Manager (multi‑workspace, task inbox, inline comments routed to agents).
  • Others see no compelling advantage over “VS Code + Claude/Codex/Gemini via plugins or CLI.”

Branding, Hype & Tone

  • “Antigravity” name seen as overblown, misleading, or an xkcd in‑joke; five syllables considered clumsy.
  • “Agentic” has become a buzzword that many find grating; marketing copy about “trust” and “new eras” reads as hype‑driven.
  • Several note the blog focuses on Google’s vision and internal narrative rather than concrete user benefits.

Early Hands‑On Impressions

  • Positive:
    • Some users genuinely like the workflow: plan docs, inline comments, browser automation, and unified Agent Manager make multi-agent work more coherent.
    • Tab completion and UI for iterating on a plan are praised by a subset of testers.
  • Negative:
    • Others report Gemini 3 performing worse than Claude or GPT-based tools on real tasks, going off on tangents or declaring tasks “done” when they aren’t.
    • Bugs (rate limits, crashes, broken Vim mode, odd windows, MCP issues) make it feel like a rushed, “vibe‑coded” beta.
  • Overall sentiment: interesting ideas, but marred by execution problems, unclear quotas, and deep distrust of Google’s long‑term commitment.