Two new Gemini models, reduced 1.5 Pro pricing, increased rate limits, and more

Pricing & Competitive Positioning

  • Major price cuts for Gemini 1.5 Pro are widely noted; input and output token prices are now significantly lower than GPT‑4o's and Claude Sonnet's (at least as initially advertised).
  • Some see this as Google leveraging TPUs and in‑house infra to undercut rivals; others question whether this is “dumping” vs just lower costs.
  • Confusion over pricing: discrepancies between Google AI Studio, Vertex AI, and third‑party platforms, plus an output price that was quietly changed in the docs.
  • Debate over whether Google is pursuing an “Android-style” strategy: a slightly worse but much cheaper model, aiming for an oligopoly position rather than dominance.

Model Quality & Benchmarks

  • Mixed views: some say Gemini is “not as smart” as GPT‑4o/Claude, hallucinates frequently, and gets stuck in repetitive loops.
  • Aider code benchmarks show the new 1.5 Pro revision roughly flat vs the previous version and lagging behind o1, Claude Sonnet, etc.
  • Others report Gemini outperforming Llama on puzzles and being very reliable for function calling, with decent price/performance for many tasks.
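The function-calling reliability praised above refers to the Gemini API's tool-use feature, where the request declares callable functions and the model may respond with a structured `functionCall` instead of text. A minimal sketch of such a request body follows, built but not sent; the `get_weather` tool is a hypothetical example, and the field shapes follow the public `generateContent` REST schema as documented at the time (worth checking against current docs):

```python
import json

# Hypothetical tool the model may choose to call. The `tools` /
# `functionDeclarations` shape is the Gemini REST API's OpenAPI-style
# subset for describing function parameters.
payload = {
    "contents": [
        {"role": "user", "parts": [{"text": "What's the weather in Boston?"}]}
    ],
    "tools": [
        {
            "functionDeclarations": [
                {
                    "name": "get_weather",
                    "description": "Return current weather for a city.",
                    "parameters": {
                        "type": "OBJECT",
                        "properties": {
                            "city": {"type": "STRING"},
                        },
                        "required": ["city"],
                    },
                }
            ]
        }
    ],
}

# POSTing this to the model's generateContent endpoint yields, when the
# model opts to call the tool, a candidate part containing `functionCall`
# (with `name` and `args`) rather than `text`; the caller executes the
# function and sends the result back in a follow-up turn.
body = json.dumps(payload)
```

The appeal for production use is that the declared parameter schema constrains the model's output into something directly parseable, which is exactly the reliability property commenters highlight.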

Developer Experience & Reliability

  • Strong criticism of the Gemini API: flaky, changing behavior, unstable safety settings, broken agent scaffolding, incorrect/outdated docs, and unannounced API changes.
  • Some say building real products on it was “futile,” even with heavy incentives and credits from Google.
  • A minority say using it via AI Studio is “decent” and praise the free quota for experimentation.
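For context on the safety-settings complaints: the Gemini API takes per-request `safetySettings` overriding the default filters. The sketch below constructs such a request with only the standard library, using category and threshold names from the public REST docs; the API key is a placeholder and the request is deliberately not sent:

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder; a real key is required to send
URL = (
    "https://generativelanguage.googleapis.com/v1beta/"
    f"models/gemini-1.5-pro:generateContent?key={API_KEY}"
)

payload = {
    "contents": [
        {"role": "user", "parts": [{"text": "List famous NFL quarterbacks."}]}
    ],
    # Per-request safety settings; BLOCK_NONE disables a filter entirely,
    # BLOCK_ONLY_HIGH blocks only high-probability harms.
    "safetySettings": [
        {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_NONE"},
        {"category": "HARM_CATEGORY_DANGEROUS_CONTENT",
         "threshold": "BLOCK_ONLY_HIGH"},
    ],
}

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # not executed here: needs a valid key
```

The instability commenters describe is that defaults and enforcement of these thresholds reportedly shifted between model revisions, so identical requests could start being blocked without any code change on the caller's side.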

Safety Filters & Recitation Issues

  • Prior versions had aggressive safety filters that blocked benign queries (economics questions, NFL quarterback lists, some novels).
  • “Recitation” errors (blocking outputs similar to training data) are called a show‑stopper for production apps; examples include trivial prompts like “Who is Google?” or boilerplate code.
  • One commenter believes the latest models have mitigated recitation; others remain wary and call this problem a major trust breaker.
  • New release makes most safety filters opt‑in, seen as a crucial improvement.

Product Integration, Moats & Usability

  • Some think Google’s moat could be deep integration with Gmail, Drive, Maps, Android, and Workspace.
  • Actual shipped integrations (e.g., Gmail search assistant, YouTube video summaries) are described as slow, inaccurate, or borderline useless, undermining that moat narrative.
  • Concerns that AI inference costs and Google’s “ship something fast” culture lead to half‑baked, resource‑constrained features.

Privacy & Data Use

  • Confusion over data privacy: consumer Gemini often uses data for training, while paid API and certain enterprise/Vertex offerings explicitly do not.
  • Some argue only on‑prem/local LLMs truly avoid leakage risk; others counter that regulated, BAA/HIPAA‑style cloud setups are “private enough” for most.

Code Assist & Tooling

  • Gemini Code Assist is widely judged inferior to GitHub Copilot and Claude‑powered tools (e.g., Cursor, Aider) in speed and usefulness.
  • A few users still find Gemini Pro strong for targeted, complex coding tasks via generic IDE extensions.