Cursor told me I should learn coding instead of asking it to generate it

AI coding assistants: capabilities and limits

  • Many see current tools (Cursor, Claude, Copilot, etc.) as roughly “keen junior” level: great at boilerplate, CRUD, small functions, tests, docs; unreliable for complex, cross-cutting changes.
  • Others argue they’re not like juniors at all: they can be “expert” in popular stacks (e.g., Python/React) but catastrophically wrong in less common ecosystems (e.g., Scala, Rust) or in unusual tasks (large Angular refactors).
  • Several describe “vibe coding” experiences where AI scaffolds large projects that later turn into unmanageable piles of errors and spaghetti requiring deep manual cleanup.

Refactoring and large codebases

  • Multiple reports that multi-file or large-scale refactors (Angular standalone conversion, big ports, module re‑wiring) routinely fail: tools lose global context, forget imports, or drift away from the task.
  • Context window and Cursor’s chunking strategy are blamed: the model “sees” only narrow slices and can’t maintain the big picture.
  • Some recommend traditional tools (grep/sed, structured search & replace, AST-based refactoring) and giving those commands to AI, rather than asking AI to edit everything directly.
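One way to follow that advice is to locate edit sites deterministically and let the model review the mechanical result, rather than asking it to find every occurrence itself. A minimal sketch of the AST-based approach using Python’s standard `ast` module (the source snippet and the name `old_name` are hypothetical):

```python
import ast

# Hypothetical source being refactored.
source = "def old_name():\n    return 1\n\nresult = old_name()\n"

# Walk the syntax tree to find exact call sites of `old_name`,
# instead of fuzzy text matching that could hit comments or substrings.
tree = ast.parse(source)
calls = [
    node.lineno
    for node in ast.walk(tree)
    if isinstance(node, ast.Call)
    and isinstance(node.func, ast.Name)
    and node.func.id == "old_name"
]
print(calls)  # → [4]: the call on line 4, but not the definition on line 1
```

Because the matches come from parsed structure rather than the model’s recollection of the codebase, the list of edit sites is complete by construction; the AI can then be asked to review or apply the changes at those locations.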

Learning, fundamentals, and the “AI generation”

  • Strong concern that AI doing all “easy” tasks will prevent newcomers from learning fundamentals, similar to how calculators reduced mental arithmetic.
  • Others counter that tools have always abstracted away lower levels (assembly → C → managed runtimes) and AI is just the next layer; the real risk is using it without first learning the basics.
  • University anecdotes: students already struggling to learn from books or docs, stuck when ChatGPT/YouTube don’t have the answer.

Moralizing and opinionated AI

  • Some are alarmed that tools refuse to generate code on quasi-ethical grounds (“do your own homework”), seeing it as overreach and a slippery slope to dystopian gatekeeping.
  • Others find the refusal funny or even appropriate mentoring—akin to Stack Overflow answers telling students to learn instead of copying.

Professional and workflow implications

  • Consensus that AI greatly amplifies productivity for experienced developers who can specify, review, and debug its output; it’s dangerous for those who can’t.
  • People foresee a widening gap: competent engineers using AI as leverage vs. “prompt-only” coders who can’t maintain or extend what the model produced.
  • Suggested healthy use: treat AI as tutor and assistant—ask for explanations, alternate designs, tests, and practice questions—rather than as a surrogate that writes all the code.