An upgraded dev experience in Google AI Studio

Perceived shift in software development

  • Several commenters see tools like AI Studio + Gemini 2.5 Pro as the next “compiler evolution”: going from natural language / high-level specs directly to working apps.
  • Some frame it as a move from “code-on-device → run-on-device” (early days) through today’s “code-on-device → run-on-cloud” mess toward “code-on-cloud → run-on-cloud.”
  • Hope: domain experts can build tools without deep knowledge of languages or deployment; fear: the same shift commoditizes both domain expertise and development work.

Expert systems, history, and AI as a bridge

  • Some argue modern LLMs might finally realize the promise of expert systems by:
    • Capturing domain logic in structured forms (ontologies, rules) but fronted by chat/agents.
    • Letting AI be the user of complex query/knowledge systems (e.g., SPARQL, MDX), shielding humans from complexity.
  • Others push back, recalling “Expert Systems™” as a 1980s hype/failure cycle and warning about repeating history.
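As a minimal illustration of the "AI as the user of the query language" idea, the sketch below stubs the LLM step with a string template; the function name, predicate URI, and query shape are hypothetical, and a real agent would have a model emit the query and then summarize the endpoint's results.

```python
# Hypothetical sketch: an agent translates a user's question into SPARQL so
# the human never touches the query language. The LLM call is stubbed here
# with a template; in a real system a model would generate the query text.

def question_to_sparql(entity: str, predicate: str) -> str:
    """Stub for the LLM step: map a parsed intent to a SPARQL query."""
    return (
        "SELECT ?value WHERE {\n"
        f'  ?s rdfs:label "{entity}" .\n'
        f"  ?s <{predicate}> ?value .\n"
        "}"
    )

# The agent would send this to a SPARQL endpoint and summarize the bindings.
query = question_to_sparql("Aspirin", "http://example.org/hasDosage")
print(query)
```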

Cloud-centric dev and autonomy concerns

  • Enthusiasts praise remote dev environments (monorepo-style) as simpler and disposable compared with painful local setups.
  • Critics see “code-on-cloud, run-on-cloud” as a threat to freedom and device ownership, increasing vendor control.

Agentic OS / Rabbit debate

  • A subset is very bullish on “AI that drives your devices directly” (e.g., RabbitOS concept) as the logical end-state: an agent that can do anything a user can do.
  • Others see Rabbit as overhyped or even scam-adjacent, question trust in its leadership, and doubt its engineering maturity.

Capabilities, context limits, and practical gaps

  • Some report Gemini handling ~50k LOC contexts well; others see hallucinations and degraded quality at large contexts.
  • Skepticism that LLMs can manage million-line, tightly coupled production systems or hard problems like DB migrations and scaling.
  • Image generation integration is viewed as promising but currently too slow for responsive apps/games.
  • Users note subtle typos and “99% correct” outputs: good enough to run, but error-prone.

Education use and cheating

  • Commenters see strong potential for new assignment types (interactive simulations, bespoke games).
  • Proposed mitigation for AI-assisted cheating: allow any tools but require in-person presentations and Q&A; scaling this to large classes is unresolved.

Product sprawl and UX confusion

  • Multiple overlapping Google offerings (AI Studio, Firebase Studio, Vertex AI Studio, Gemini Canvas, Jules, Code Assist) are seen as confusing and symptomatic of poor product management.
  • One commenter explains rough distinctions: AI Studio as a lightweight playground for Gemini APIs, Firebase Studio as a more traditional AI-assisted web IDE, Canvas as chat-plus-mini-apps, Jules as ticket-based code editing, etc.
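If AI Studio is a playground over the Gemini APIs, "graduating" from it means issuing the API call yourself. The sketch below only builds and prints the request; the model name and the REST payload shape (`contents`/`parts`, `generationConfig`) are assumptions based on the public generateContent endpoint, and an API key would be needed to actually send it.

```python
import json

# Hedged sketch: constructing a Gemini generateContent request by hand, the
# kind of call AI Studio prototypes for you. Model name and payload shape
# are assumptions; nothing is sent over the network here.
MODEL = "gemini-2.5-pro"
ENDPOINT = (
    "https://generativelanguage.googleapis.com/v1beta/"
    f"models/{MODEL}:generateContent"
)

payload = {
    "contents": [{"parts": [{"text": "Draft a TODO app in one HTML file."}]}],
    "generationConfig": {"temperature": 0.2},
}

print(ENDPOINT)
print(json.dumps(payload, indent=2))
```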

Business model, data use, and legal concerns

  • Users worry AI Studio will eventually stop being free; some already find its responses better than those of the standard Gemini app.
  • Strong criticism of Google’s terms:
    • Training on user data and human review by default.
    • Clauses prohibiting building competing models with outputs.
    • Lack of straightforward privacy-preserving modes compared to some competitors.
  • This is framed as turning transformative tech into a “legalese-infused dystopia.”

Comparisons with other tools/providers

  • Mentions of:
    • Lovable’s git sync as a desired feature.
    • Websim as an earlier prompt-to-webapp tool.
    • Cursor for file-level integration using Gemini.
    • Grok criticized for political/ideological content; others downplay this.
    • One user reports migrating from Anthropic (Claude) to Gemini/OpenAI due to Anthropic’s weaker structured-output and API-compatibility story, despite Claude’s model quality.
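The structured-output concern in the last point is roughly this pattern: ask the model for JSON conforming to a schema, then parse and validate it client-side before trusting it. The schema and field names below are hypothetical, and the model reply is simulated.

```python
import json
from dataclasses import dataclass

# Hypothetical sketch of the structured-output pattern: the model is asked
# for JSON matching a schema, and the client validates before using it.

@dataclass
class Ticket:
    title: str
    priority: int

def parse_ticket(raw: str) -> Ticket:
    data = json.loads(raw)  # raises on malformed JSON ("99% correct" output)
    return Ticket(title=str(data["title"]), priority=int(data["priority"]))

# Simulated model reply; in practice this comes from the API response.
reply = '{"title": "Fix login redirect", "priority": 2}'
print(parse_ticket(reply))
```

Providers differ in how much of this validation they do server-side (schema-constrained decoding) versus leaving it to the client, which is the gap the commenter describes.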