JetBrains IDEs Go AI: Coding Agent, Smarter Assistance, Free Tier
Feature Set & Quality vs Other Tools
- Several users compare JetBrains AI / Junie to Cursor, Claude Code, Copilot, Continue.dev, Windsurf, etc.
- Junie as an agent is generally seen as decent: for some it is “better than Copilot/Continue” and good for scaffolding; for others it is slower and weaker than Cursor/Windsurf.
- Autocomplete is widely viewed as “anemic” and lacking features like Next Edit Prediction / “tab-tab-tab” style flows; some are building plugins to fill this gap, and JetBrains hints such features are coming.
- One user reports Junie replaced their Claude Code and Cursor usage, with fewer destructive rewrites, but complains about loss of context between messages.
- Complaints about Claude Code and Cursor include cost, hallucinated “demo” entry points, and breaking existing functionality.
Models, Benchmarks & Product Availability
- Junie is described as powered by Anthropic Claude and OpenAI models; AI Assistant supports Claude 3.7 Sonnet and Gemini 2.5 Pro.
- A SWE-bench Verified score of 53.6% is mentioned; some consider this unimpressive compared to other models and note the result isn’t listed on the official SWE-bench Verified page.
- Junie is currently available only in some IDEs (IntelliJ, PyCharm, WebStorm, GoLand); Rider and others lag due to architectural differences (e.g., ReSharper integration).
Pricing, Tiers, Credits & Bundling
- New unified subscription tiers: AI Free, AI Pro, and AI Ultimate.
- AI Free: unlimited code completion, local models, and credit-based cloud assistance/Junie, but not available in Community Editions of PyCharm/IntelliJ.
- All Products Pack and dotUltimate now include AI Pro; some users pleasantly surprised, others suspect it indicates weak standalone AI sales and foresee eventual price hikes.
- Confusion around “credits”: they correspond to token-limited cloud usage; details are still being clarified. Some dislike token- or credit-based billing due to anxiety over invisible consumption.
- No obvious way to pay for overages; hitting limits just disables cloud usage.
Local Models, “Offline” Use & Data Policies
- Users can connect local models via Ollama or LM Studio in the free tier (a quick reachability check is sketched after this list).
- However, the assistant currently requires online access to JetBrains AI servers even for local models; it refuses to start chats when blocked at the network level.
- JetBrains’ own docs say “offline” mode prevents most remote calls, but rare cloud usage may still occur, which some find unsettling for privacy-sensitive use.
- JetBrains claims strict contracts with providers: data cannot be used for training and is limited to validating requests.
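For readers wiring up the local-model path, a quick way to confirm an Ollama or LM Studio server is actually reachable before pointing the IDE at it is to probe each tool’s default local endpoint. This is a standalone sketch, not JetBrains code: the ports (11434 for Ollama, 1234 for LM Studio) are those tools’ defaults and may differ on a customized setup.

```python
import json
import urllib.request

# Default local endpoints for the two backends mentioned in the thread.
# Ollama lists installed models at /api/tags; LM Studio exposes an
# OpenAI-compatible /v1/models route. Both ports are the tools' defaults.
ENDPOINTS = {
    "Ollama": "http://localhost:11434/api/tags",
    "LM Studio": "http://localhost:1234/v1/models",
}


def probe(name: str, url: str) -> None:
    """Print the models a local server reports, or why it is unreachable."""
    try:
        with urllib.request.urlopen(url, timeout=2) as resp:
            payload = json.load(resp)
    except (OSError, ValueError) as exc:  # connection refused, timeout, bad JSON
        print(f"{name}: not reachable ({exc})")
        return
    # Ollama returns {"models": [{"name": ...}]}; LM Studio (OpenAI-style)
    # returns {"data": [{"id": ...}]}.
    entries = payload.get("models") or payload.get("data") or []
    models = [m.get("name") or m.get("id") for m in entries]
    print(f"{name}: {models if models else 'reachable, but no models loaded'}")


if __name__ == "__main__":
    for backend, endpoint in ENDPOINTS.items():
        probe(backend, endpoint)
```

If either backend responds with a non-empty model list, the failure mode described above (chats refusing to start when JetBrains AI servers are blocked) is the extra server-side check rather than a broken local setup.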
Enabling/Disabling AI & Educational Concerns
- AI features are opt-in; there is also a .noai marker file that fully disables AI Assistant features for the project it is placed in.
- This is important for teachers who want to prevent accidental, autocomplete-driven “vibe coding” by students (a scripted sketch follows this list), though they acknowledge determined students can simply delete .noai.
- Some worry that ubiquitous built-in AI will encourage cheating and degrade learning; others note cheating predated AI.
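Since the .noai marker is just a file in each project, the classroom case above can be scripted. A minimal sketch, assuming a hypothetical student_projects/ directory with one checked-out project per subfolder and that an empty .noai file is sufficient, as the thread describes:

```python
from pathlib import Path

# Hypothetical layout: one checked-out student project per subdirectory.
PROJECTS_ROOT = Path("student_projects")


def disable_ai(project_dir: Path) -> None:
    """Drop an empty .noai marker so AI Assistant is disabled for this project."""
    marker = project_dir / ".noai"
    marker.touch(exist_ok=True)
    print(f"created {marker}")


if __name__ == "__main__":
    for project in sorted(p for p in PROJECTS_ROOT.iterdir() if p.is_dir()):
        disable_ai(project)
```

As the teachers in the thread concede, this only prevents accidental use; nothing stops a student from deleting the marker again.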
Performance, UX & Bugs
- Some report heavy resource use (fans spinning, IDE sluggishness) and Junie itself being very slow, possibly due to first-wave load.
- One noted bug: generated patches include meta text inside the code (“the provided snippet is a modification…”), breaking compilation.
- Complaints that “codebase off” still leads to many random files being attached, slowing requests.
Attitudes Toward AI & JetBrains Strategy
- A vocal subset dislikes AI entirely, preferring editors like (neo)vim or non-AI-focused tools, and resents paying indirectly for AI development.
- Others argue JetBrains must match VS Code + Copilot to stay competitive, but appreciate that AI can still be disabled.
- Debate over proprietary vs FOSS tooling: some prefer fully FOSS to avoid vendor lock-in; others counter that open-source IDEs tend to stagnate and that JetBrains’ longevity is a point in its favor.
- One commenter claims product quality declined after the start of the war in Ukraine due to staff relocations; others strongly dispute this and report stable or improved quality.