Why Copilot Is Making Programmers Worse at Programming
Evidence and Speculation
- Many note the article offers no data, just plausibility arguments; some find the reasoning logical, others dismiss it as an “old man yells at cloud” rant.
- Several ask for proper studies; one link is shared suggesting worse code quality with Copilot, but others point out that the same article cites conflicting studies.
- Consensus: current claims about long-term effects are mostly speculative.
Comparisons to Earlier Tools
- Repeated analogies to calculators, Stack Overflow, Google, IDE autocomplete, syntax highlighting, ORMs, high-level languages, and even writing vs oral culture.
- One camp says this is the same recurring panic and that the industry has always adapted.
- Others argue Copilot is different because it produces full solutions that can be used without understanding, unlike calculators that just do arithmetic.
Effects on Skills and Learning
- Concerns: erosion of core skills, less debugging practice, over-reliance on generated code, weaker fundamentals (already seen with SQL/ORMs, memory models, etc.).
- Several worry especially about students and juniors: easy cheating on basic algorithm assignments, or spending hours coaxing an LLM into a working answer instead of thinking through the problem themselves.
- Counterpoint: even if some low-level skills atrophy, that may be acceptable if higher-level productivity and new “AI collaboration” skills improve.
Developer Experiences with Copilot and LLMs
- Positive reports: major boosts in routine work—boilerplate, CRUD, tests, config, migrations, regex/glob snippets, new frameworks, code translation, explanation of unfamiliar code.
- Many use it as “super-autocomplete” or rubber duck: they still design solutions, then accept or edit suggestions.
- Negative reports: high error rates on anything non-trivial; debugging unfamiliar AI-generated code can cost more time than writing it by hand; the tool can encourage bloated or repetitive code.
- Several emphasize that responsibility remains with the committer; the real risk is users blindly trusting large autogenerated changes.
Jobs, Standards, and Social Impact
- Some fear that massive cost savings will reduce engineering headcount and wages, and that weak developers or non-developers will flood the field with low-quality software.
- Others argue good engineering still requires judgment, empathy, and design skills that tools don’t replace.
- There is broad agreement that LLMs are here to stay; disagreement is about whether they mostly help, hurt, or just shift which skills matter.