Google API keys weren't secrets, but then Gemini changed the rules
Perceived AI-Generated Style of the Article
- Many commenters suspect the write-up was at least heavily edited by an LLM, citing:
  - Very tight structure (“The Problem”, “What You Should Do Right Now”), highly consistent cadence, and polished “average corporate” tone.
  - Overuse of patterns associated with LLMs: dramatic one-line paragraphs, “rule of three” punchy repetitions (“No warning. No confirmation dialog. No email notification.”), “not X, but Y” constructions, and scenario vignettes.
- Others push back, arguing:
  - These are standard writing techniques (e.g., the rule of three) taught to humans; good structure ≠ AI.
  - Human + LLM collaboration is plausible, but reliable detection from style alone is dubious and can unfairly discredit competent writers.
- Several note an “uncanny valley” effect: each rhetorical device is individually ordinary, but in such concentration the overall texture feels synthetic.
How the Gemini Key Issue Works
- Historically, Google documented many API keys (e.g., Maps, Firebase) as not secrets—essentially project/billing identifiers meant to be public, often protected only by HTTP referrer/domain restrictions.
- When Gemini (Generative Language API) is enabled on a GCP project:
  - Existing API keys in that project silently gain Gemini access.
  - Keys that were intentionally embedded in client-side code now become credentials for a high-value, data-bearing API.
- Debate clarifies: Gemini is not enabled by default on projects, but once enabled, it is effectively enabled on all existing keys in that project unless explicitly restricted.
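The per-key restriction the debate points to can be sketched with `gcloud`. This is a hedged example, not from the article: it assumes the `gcloud services api-keys` command group, and the key ID and service names are illustrative placeholders.

```shell
# Audit: list API keys in the current project along with their
# restrictions, to see which keys have no API-target restriction
# (and would therefore be usable against any enabled API, Gemini included).
gcloud services api-keys list --format="table(displayName, restrictions)"

# Mitigate: pin a key to the one API it was created for, so enabling
# Gemini on the project does not expand what this key can call.
# KEY_ID and the service name are placeholders for illustration.
gcloud services api-keys update KEY_ID \
  --api-target=service=maps-backend.googleapis.com
```

A key with an explicit `--api-target` restriction is rejected for any other service, which is the “explicitly restricted” escape hatch mentioned above.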
Security & Billing Consequences
- This “retroactive privilege expansion” allows:
  - Access to Gemini uploads, cached content, and context via keys that may already be widely scraped.
  - Potentially huge, unintended bills; users report five-figure charges from stolen Gemini keys.
- Earlier risks (running up Maps usage) existed, but LLM calls are:
  - Much costlier per request.
  - Directly usable as an AI backend for attackers’ own apps, not just for showing maps.
- Google’s proposed mitigations (e.g., blocking “leaked” keys) are seen as incomplete:
  - Many keys were never meant to be secret, so calling them “leaked” is misleading.
  - A clean fix likely requires stripping Gemini access from vast numbers of keys, breaking workflows.
Google’s Processes, Responsibility, and Disclosure
- Commenters are shocked such a basic design flaw passed security review, especially at a company known for strong security.
- Hypotheses include:
  - Organizational complexity and siloing (“left hand doesn’t know what right hand is doing”).
  - Pressure to rapidly boost Gemini adoption and usage metrics.
- Some question whether publishing while Google is still “working on it” is responsible; others say:
  - Exploitation is already happening; public disclosure is needed so customers can audit and revoke keys.
  - The more troubling fact is that users are learning this from a third party, not from Google directly.
Key Design, Legal/Consumer Angles, and Best Practices
- Core design error highlighted: public, non-secret identifiers should never later become secrets with access to private data.
- Analogy drawn to SSNs: originally identifiers, later (mis)used as auth secrets, creating long-term risk.
- Lack of hard, enforced spending caps on GCP/Gemini is heavily criticized:
  - Compared unfavorably to other AI providers that allow pre-paid or hard limits per key.
- Some predict regulatory scrutiny, especially in the EU, given parallels to “bill shock” in telecom.
- Suggested mitigations and lessons:
  - Require explicit, per-key opt-in for sensitive APIs like Gemini; do not auto-expand scopes of old keys.
  - Prefer separate GCP projects or at least tightly scoped keys for public vs. internal services, despite quota and UX friction.
  - Restrict client-exposed keys by referrer and API, or proxy requests through a backend if true secrecy is needed.
  - Avoid uploading sensitive documents to LLMs given how brittle surrounding security and billing controls can be.
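The “proxy requests through a backend” mitigation can be sketched in a few lines of Python. This is a minimal illustration, not the article’s code: it assumes the public Generative Language REST endpoint (`generativelanguage.googleapis.com`), an illustrative model name, and a `GEMINI_API_KEY` environment variable; the point is only that the key stays server-side instead of shipping in client code.

```python
import json
import os
import urllib.request

# Clients talk to this backend; the API key never leaves the server,
# so scraping the client-side code yields nothing billable.
API_BASE = "https://generativelanguage.googleapis.com/v1beta"
MODEL = "gemini-1.5-flash"  # illustrative model name


def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Build the upstream Gemini request, attaching the server-held key."""
    url = f"{API_BASE}/models/{MODEL}:generateContent?key={api_key}"
    body = json.dumps({"contents": [{"parts": [{"text": prompt}]}]}).encode()
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )


def generate(prompt: str) -> str:
    """Forward a client prompt to Gemini; the key is read from the
    server environment and never appears in any client-visible URL."""
    req = build_request(prompt, os.environ["GEMINI_API_KEY"])
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # Response shape per the generateContent REST API.
    return data["candidates"][0]["content"]["parts"][0]["text"]
```

Wrapping `generate()` in a real web handler would also be the natural place to add per-user rate limits and spend accounting, which the per-key billing model discussed above lacks.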