How AI assistance impacts the formation of coding skills
Study findings and what they actually say
- Several commenters note the paper is often misrepresented. What the study actually shows:
  - Using GPT‑4o to learn a new async Python library (Trio) reduced conceptual understanding, code reading, and debugging ability.
  - Average task time with AI was only slightly lower, and the difference was not statistically significant.
  - Fully delegating the task to AI improved speed somewhat but severely hurt learning of the library.
- Some point out the abstract’s reference to “productivity gains across domains” is citing prior work, not this experiment.
Productivity gains vs. erosion of skills
- Many see a clear tradeoff: faster completion (especially for juniors) at the expense of deep understanding and debugging skills.
- Others argue this is analogous to calculators or compilers: some skills naturally atrophy when tools arrive, and perhaps that’s acceptable.
- Concern: if juniors grow up “supervising” AI without ever building fundamentals, future teams may lack people capable of debugging or validating AI‑written code, especially in safety‑critical domains.
Patterns of AI use: tutor vs. crutch
- The paper’s breakdown of interaction patterns resonated:
  - Using AI to explain concepts, answer “why” questions, and clarify docs tended to preserve learning.
  - Using it mainly for code generation or iterative AI‑driven debugging correlated with poor quiz scores.
- Several experienced developers say they learn faster by using AI as an on‑demand mentor or doc navigator, not as an autonomous coder.
Code quality, testing, and comprehension
- Strong debate over “functional competence vs. understanding”:
  - One side: correctness can be grounded in tests, differential testing, and high‑level complexity awareness; deep implementation understanding is optional.
  - Other side: tests miss unknown edge cases; reading and understanding code is crucial for discovering hidden assumptions and for debugging real failures.
- Multiple people report that AI‑written code feels alien even after they have reviewed it; returning to a codebase later, they find self‑written code far easier to understand.
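To make the "correctness grounded in tests" position concrete, a minimal sketch of differential testing: run a candidate implementation (here a hypothetical stand‑in for AI‑generated code, a hand‑rolled insertion sort) against a trusted reference on many random inputs and report any divergence. The function names and input generator are illustrative assumptions, not from the study.

```python
import random

def ai_written_sort(xs):
    # Hypothetical stand-in for AI-generated code under review:
    # a plain insertion sort.
    out = []
    for x in xs:
        i = len(out)
        while i > 0 and out[i - 1] > x:
            i -= 1
        out.insert(i, x)
    return out

def differential_test(candidate, reference, trials=1000, seed=0):
    """Compare candidate against a trusted reference on random inputs.

    Returns the first diverging input, or None if none was observed.
    """
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.randint(-50, 50) for _ in range(rng.randint(0, 20))]
        if candidate(list(xs)) != reference(list(xs)):
            return xs  # counterexample found
    return None  # no divergence observed (not a proof of correctness)

counterexample = differential_test(ai_written_sort, sorted)
```

The rebuttal in the thread applies directly: random inputs only cover the distribution you thought to generate, so hidden assumptions in the candidate can survive every trial.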
Career development and the nature of software work
- Repeated theme: programming is continuous learning, not something juniors finish early.
- Fear that “AI‑native” juniors will ship features quickly but never develop architecture, debugging, and systems thinking—exacerbated by management focusing solely on short‑term velocity.
Centralization, reliability, and motives
- Worries about dependence on cloud AI (outages, pricing power, enshittification, privacy). Local models are seen as a partial answer.
- Anthropic gets both praise for publishing negative results and skepticism about small sample size, arXiv‑only status, and possible PR/“safety” positioning.