Gemini 2.0 is now available to everyone

Assistant integration and basic functionality

  • Several Android users say Gemini’s replacement of the old Assistant was a major regression: at launch it couldn’t do home control, TV control, or alarms/timers, which had been the Assistant’s main real-world uses.
  • Some report that those basics now work reliably and may even be more consistent than before, but they see the initial rollout as a project‑management failure and an example of sacrificing systems engineering rigor for speed.
  • Others question the point of an assistant that isn’t reliable on routine tasks: if you must double‑check every action, you may as well not use it at all.

Model quality and coding performance

  • Reactions are sharply mixed. Some users call Gemini 2.0 Pro Experimental their favorite general “thinking/writing/research” model, on par with or slightly better than leading competitors for non‑coding tasks.
  • For coding and bug-finding, several reports say Gemini 2.x lags behind DeepSeek R1, Claude, and o3-mini-high, with higher hallucination rates and weaker code review.
  • Others praise Gemini 2.0 Flash for multimodal work (documents, object localization, PDF parsing) and see it as very strong for vision and text+image at its price.
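The object-localization praise above refers to Gemini returning bounding boxes as `[ymin, xmin, ymax, xmax]` values normalized to a 0–1000 range, which the caller must rescale to pixels. A minimal sketch of that rescaling step (the helper name and sample box are illustrative, not from the thread):

```python
def denormalize_box(box, img_width, img_height):
    """Convert a Gemini-style [ymin, xmin, ymax, xmax] box,
    normalized to 0-1000, into (left, top, right, bottom) pixels."""
    ymin, xmin, ymax, xmax = box
    return (
        int(xmin / 1000 * img_width),
        int(ymin / 1000 * img_height),
        int(xmax / 1000 * img_width),
        int(ymax / 1000 * img_height),
    )

# Example: a box covering the central quarter of a 1920x1080 image
print(denormalize_box([250, 250, 750, 750], 1920, 1080))
# -> (480, 270, 1440, 810)
```

Note the y-before-x ordering in the model output, which is easy to get backwards when drawing boxes with image libraries that expect (left, top, right, bottom).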

Large context windows and RAG

  • The 2M‑token context window (with users reporting successful tests past 800k tokens in practice) is seen as a potential “RAG killer” for many use cases: entire books, large codebases, or config dumps can be dropped in directly.
  • Some users confirm it handles long, dense documents much better than earlier or rival models; others say error rates still rise with more context and argue RAG remains worthwhile even when everything fits.
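The trade-off debated above — stuff everything into the window versus keep a retrieval pipeline — often reduces to whether the corpus fits under the token limit at all. A rough sketch of that gating decision, using the common ~4-characters-per-token heuristic; the constants and function names are illustrative assumptions, not an official API:

```python
CONTEXT_LIMIT_TOKENS = 2_000_000  # advertised 2M-token window
SAFETY_MARGIN = 0.8               # commenters report error rates rising near the limit

def estimate_tokens(text: str) -> int:
    """Rough heuristic: roughly 4 characters per token for English text."""
    return len(text) // 4

def fits_in_context(docs: list[str]) -> bool:
    """True if all documents can plausibly be sent directly, skipping retrieval."""
    total = sum(estimate_tokens(d) for d in docs)
    return total <= CONTEXT_LIMIT_TOKENS * SAFETY_MARGIN

# A ~300-page book (~600k characters, ~150k tokens) fits comfortably...
print(fits_in_context(["x" * 600_000]))        # True
# ...but twenty of them blow past the margin, so RAG still earns its keep.
print(fits_in_context(["x" * 600_000] * 20))   # False
```

Even when everything fits, some commenters argue retrieval remains worthwhile for cost and accuracy, so this check is a floor, not a full decision rule.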

Product lineup, naming, and UX confusion

  • Many complain about Google’s “Googley” fragmentation: Gemini app vs AI Studio vs Vertex, two different “Studios,” many near‑identical model names, and overlapping “experimental/preview” labels.
  • Workspace users in particular feel like second‑class citizens: unclear what “Gemini Advanced” actually runs, inconsistent access to 2.0 models, no model switcher, missing features like Deep Research, and frequent feature‑flag weirdness.
  • The proliferation of similarly named models (2.0, Pro, Pro Experimental, Flash, Flash Lite, Flash Thinking, etc.) makes it hard to build a mental model or pick the “right” one.

Pricing and free tier

  • The generous free API quotas and low prices (especially for Flash and Flash Lite, and for PDF/audio use) are widely praised; some say it’s now the best value for document parsing and multimodal tasks.
  • Free search tool calls (up to ~1,500/day) are highlighted as a notable perk.

Safety, politics, and censorship behavior

  • Voice chat’s “no politics” policy is a major flashpoint. Users report it refusing to continue even innocuous conversations that merely mention politicians’ names (e.g., in a recipe context).
  • Some see this as dystopian and infantilizing; others argue a hard “no politics” rule avoids endless outrage cycles and alignment fights, though the current trigger behavior is considered over‑tight.
  • Several note Gemini feels more censored/hesitant than some competitors, especially in the consumer app.

Trust, data, and terms of use

  • A subset of commenters say their interest is effectively zero because they no longer trust Google as a steward of data or products.
  • Others worry about the requirement to log in with a Google account and the implied cross‑correlation of activity.
  • The ToS clause forbidding use of Gemini to develop competing models is seen as off‑putting: some say they’ll ignore it, others fear quiet account bans.

Availability, apps, and missing capabilities

  • There are web and mobile chat apps, plus AI Studio for direct model access, but users complain that multimodal output and video-file input remain gated or unclear.
  • Some report practical failures (e.g., mis-scheduled calendar events, truncated long text input) that reinforce perceptions that Gemini is still behind the best alternatives.