Student perceptions of AI coding assistants in learning
Scope and rigor of the study
- Several commenters note the study’s very small sample (N=20) and see its findings as unsurprising: AI helps early confidence and implementation but leaves gaps when assistance is removed.
- Some argue the qualitative insights (how students actually use tools) are more valuable than the quantitative claims; others want much larger, more rigorous replications.
Learning, memorization, and syntax
- Debate over whether schools overvalue memorizing syntax at the expense of deeper concepts, abstraction, and readability.
- Some contend you must first master basics to build higher-level skills; others stress that “learning” means generalization, not mere regurgitation.
- There’s concern that AI can create an illusion of understanding when students have not “earned” the knowledge through practice.
AI coding assistants vs calculators and other tools
- Repeated analogies to calculators, typewriters, Google, and high-level languages.
- Key distinction drawn: calculators and compilers are deterministic and logically sound; LLMs are probabilistic, can hallucinate, and their outputs are hard to verify or debug.
- Others counter that tools can still be transformative and widely adopted even if they require careful use and produce errors when misused.
Impact on assignments and curricula
- Some argue the particular OOP assignment in the paper is contrived, designed to force the use of inheritance rather than teach real-world design; in such artificial tasks, AI naturally looks less helpful.
- This is framed as a critique of curriculum design more than of AI’s learning value.
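To make the "contrived assignment" critique concrete, here is a hypothetical sketch (not the paper's actual assignment) of the pattern commenters describe: a task graded on using inheritance, where composition would be the more natural real-world design.

```python
# Hypothetical illustration of a "forced inheritance" exercise:
# the rubric requires Stack to inherit from a provided List class.

class List:
    """Instructor-provided base class the assignment requires extending."""
    def __init__(self):
        self.items = []

    def append(self, x):
        self.items.append(x)

    def remove_last(self):
        return self.items.pop()


class Stack(List):
    """Inheritance-based solution the rubric rewards. Note the leaky
    interface: callers can still use append/remove_last directly."""
    def push(self, x):
        self.append(x)

    def pop(self):
        return self.remove_last()


class StackByComposition:
    """The design most working code would prefer: wrap a list privately,
    exposing only the stack operations."""
    def __init__(self):
        self._items = []

    def push(self, x):
        self._items.append(x)

    def pop(self):
        return self._items.pop()
```

Both versions behave identically as stacks; the critique is that grading the first pattern teaches is-a relationships where a has-a relationship fits better, and an AI assistant asked for "a stack" will reasonably produce the second.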
Cheating, grading, and credential erosion
- A long subthread describes how LLMs have “broken the curve”: cheating is easy, online/homework scores are inflated, and diligent students struggle to compete.
- Professors sometimes acknowledge suspected cheaters yet don’t adjust curves or enforce rules, prompting frustration.
- Others note that high exam scores from students in the back row are not always cheating; some simply learn outside lecture.
- Several predict universities and employers will devalue GPAs and rely more on direct assessments and longer, in-house evaluations.
Future of programming and “AI-native” skills
- Some predict that, as AI improves, learning to code “by hand” will become niche, akin to doing integrals manually.
- Critics argue that without a deep mental model (the “tower of knowledge”), students will be unable to handle hard problems or understand/verify AI-generated code.
- There’s speculation about a new divide between skilled AI users who use tools to think better and unskilled users who outsource thinking, with major implications for education and hiring.