Gemini CLI tips and tricks for agentic coding
Perceived model and tool quality
- Many consider GitHub Copilot weaker than modern models, yet still rank Gemini’s agentic tools behind Claude Code, Codex, Cursor, and Opencode in reliability and UX.
- Some report Gemini 3 Pro as very capable—“relentless” on detailed specs, great at understanding big codebases, and strong for technical writing—while others say it struggles even with simple coding tasks, loops, or stops mid-operation.
- Several people prefer Claude Code’s “killer app” experience: better navigation, planning, and collaboration; they feel Gemini CLI requires too much supervision.
Gemini CLI reliability, limits, and billing
- Users report frequent operational issues: 409 errors in the past, “daily limit reached” messages despite having billing enabled, random error loops, and very slow startup caused by credential loading.
- Billing and limits are described as opaque across vendors, with speculation that even aborted or filtered responses are charged; some think Gemini’s metering feels random.
- Availability is geographically restricted, confusing some users; Termux support is broken without specific terminal settings.
Agent behavior and safety
- Several horror stories: Gemini agents hardcoding IDs, wrecking repos, blanking files, disabling lint rules en masse, or going into hour-long nonsense loops.
- Strong advice: always use git (branches/worktrees), sandbox/containers, and require the agent to write and update a plan before making changes; a minimal sketch of that setup follows this list.
- Some wish Gemini CLI had a proper “plan-only / no-write” mode; current behavior often ignores narrow instructions and “fixes” everything.
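A minimal sketch of the “worktree plus written plan” precaution, in Python. The branch naming, the PLAN.md filename, and the template text are illustrative assumptions, not anything Gemini CLI itself enforces; the point is that the agent only ever runs inside a disposable checkout with its plan written down where a human can diff it.

```python
#!/usr/bin/env python3
"""Sketch: isolate an agent session in a throwaway git worktree with a plan file.

Assumptions (not from the thread): branch/worktree naming, the PLAN.md filename,
and that the agent is launched manually inside the new worktree afterwards.
"""
import subprocess
from pathlib import Path


def prepare_agent_worktree(repo: Path, task: str) -> Path:
    """Create a dedicated branch + worktree so the agent never touches main."""
    branch = f"agent/{task}"
    worktree = repo.parent / f"{repo.name}-{task}"
    subprocess.run(
        ["git", "-C", str(repo), "worktree", "add", "-b", branch, str(worktree)],
        check=True,
    )
    # Require a written plan up front; the agent is instructed to keep it updated.
    plan = worktree / "PLAN.md"
    plan.write_text(
        f"# Plan for: {task}\n\n"
        "1. (agent fills in steps before editing any code)\n"
        "2. Update this file after each step; stop if a step fails twice.\n"
    )
    return worktree


if __name__ == "__main__":
    wt = prepare_agent_worktree(Path.cwd(), "fix-login-bug")
    print(f"Run the agent inside {wt}; review with `git diff`, and remove the "
          "worktree if it goes off the rails.")
```

Containers or the CLI’s own sandboxing are a separate layer on top of this; the worktree only limits the blast radius in the repository, and `git worktree remove` discards a session that went wrong.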
Workflows, prompting, and context management
- A camp advocates minimal ceremony (“just yell at it”) and simple custom agents (git + ripgrep + a few tools), leveraging Gemini 3’s large context and high “token density.”
- Others invest in structured workflows: PROBLEM.md, plan.md/status.md, context files, repomix snapshots, and iterative prompt refinement, treating the agent like a junior dev (a scaffolding sketch follows this list).
- Debate over anthropomorphizing LLMs: some find “treat it like a naive colleague” a useful mental model; others insist on viewing them as statistical document generators to avoid misplaced expectations.
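One way to read the “structured workflow” camp, as a minimal sketch: scaffold the context files once, regenerate a repomix snapshot before each session, and point the agent at them. The file names come from the comments; the templates and the bare `npx repomix` invocation are illustrative assumptions (repomix’s flags and default output name may differ by version).

```python
#!/usr/bin/env python3
"""Sketch: scaffold the structured-workflow files some commenters describe.

PROBLEM.md, plan.md, and status.md are named in the thread; the template text
and the repomix call are assumptions for illustration only.
"""
import subprocess
from pathlib import Path

TEMPLATES = {
    "PROBLEM.md": "# Problem\n\nWhat is broken, how to reproduce it, and what 'done' looks like.\n",
    "plan.md": "# Plan\n\n- [ ] Step 1 (agent proposes; human approves before any edits)\n",
    "status.md": "# Status\n\nUpdated by the agent after every change.\n",
}


def scaffold(repo: Path) -> None:
    """Write the context files, without clobbering ones that already exist."""
    for name, body in TEMPLATES.items():
        path = repo / name
        if not path.exists():
            path.write_text(body)


def snapshot(repo: Path) -> None:
    """Pack the repo into a single file the model can ingest as context.

    Assumes npx can fetch repomix; the output filename follows repomix defaults.
    """
    subprocess.run(["npx", "repomix"], cwd=repo, check=True)


if __name__ == "__main__":
    root = Path.cwd()
    scaffold(root)
    snapshot(root)
```

The exact files matter less than the habit they enforce: the agent externalizes its plan and status as text a human can review and diff between iterations, which is the “junior dev” framing in practice.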
Meta: guides, fatigue, and fragmentation
- Some think the tips repo is partly speculative or AI-written “slop,” yet still “good slop” and practically useful.
- There’s visible fatigue with endless “how to use AI” content and concern that best practices become obsolete in weeks.
- Multiple commenters wish for a robust, LLM-agnostic coding agent standard; current ecosystem feels fragmented, with model-specific CLIs and rapidly changing behaviors.