How University Students Use Claude
Foundational Skills vs. “Cheating with AI”
- Many comments argue the report understates how often Claude/LLMs are used to bypass learning rather than “collaborate.”
- Numerous anecdotes from CS and math courses: students paste assignments and error messages into ChatGPT, turn in code they can’t explain, and hit a “wall” in advanced courses because they never learned basics (structs, pointers, data types, etc.).
- Others see value in AI as a tutor or “spotter” for debugging and hints, but stress that help must come only after a genuine attempt; otherwise students lose the “productive struggle” that builds deep understanding.
- There is concern about a generation that can only operate via prompt engineering, with little internalized knowledge or problem‑solving ability.
Universities, Assessment, and Adaptation
- Several argue universities should embrace AI as inevitable but radically change assessment: more in‑person, proctored, or handwritten exams; oral exams and tutorials; open‑book/AI‑allowed exams designed so only those who truly understand can pass.
- Others propose:
  - Homework largely ungraded or purely formative.
  - Grades based mainly on supervised exams and live performance.
  - Dual tracks: some tasks explicitly no‑AI, others explicitly AI‑enhanced and graded differently.
- There is pushback against “watering down” standards: degrees only remain meaningful if institutions actually ensure graduates can do the work without crutches.
Productivity, Cognition, and Long‑Term Effects
- Strong disagreement on whether LLMs increase productivity: some programmers report dramatic speedups; others say review, debugging, and lost understanding outweigh any gains.
- Broader worry that offloading thinking (not just lookup or arithmetic) will erode critical reasoning, memory, and metacognition; commenters draw analogies to calculators, GPS, spell check, and search, but argue the stakes are higher because LLMs sit closer to “thinking” itself.
- Counter‑argument: tools have always shifted which skills matter, so perhaps humans should focus on higher‑level reasoning while AI handles the routine parts, though it is unclear whether that shift actually happens in practice.
Anthropic’s Report and Incentives
- Multiple commenters see the report as “AI‑washing”: category labels like “create and improve educational content” or “provide solutions for assignments” could easily hide large‑scale cheating.
- Noted conflict of interest: Anthropic both sells into universities and frames usage as largely benign; it has incentives to under‑report direct essay‑writing, test‑answering, and plagiarism.
- Methodological concern: using an LLM to classify millions of student chats imports the LLM’s own unreliability into the analysis, making fine‑grained distinctions (e.g., practice vs. cheating) dubious; the sketch below shows how even modest classifier error rates can swing reported category shares.
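To make this concern concrete, here is a minimal sketch of the standard misclassification arithmetic behind it. Every number in it is hypothetical (the 10% true prevalence, the sensitivity/false‑positive pairs, and the category itself); Anthropic does not publish error rates for its judge model, so this only illustrates the shape of the problem.

```python
# Minimal sketch (all rates hypothetical) of how judge-model error
# distorts reported category shares when an LLM labels chats at scale.

def observed_share(p: float, sens: float, fpr: float) -> float:
    """Standard misclassification identity:
    observed share = true positives + false positives."""
    return p * sens + (1 - p) * fpr

P_TRUE = 0.10  # suppose 10% of chats truly belong to a "cheating" category

# Hypothetical (sensitivity, false positive rate) pairs for the judge model.
for sens, fpr in [(0.95, 0.15), (0.70, 0.05), (0.40, 0.01)]:
    share = observed_share(P_TRUE, sens, fpr)
    print(f"sens={sens:.0%}, fpr={fpr:.0%} -> reported {share:.1%}")

# Output:
# sens=95%, fpr=15% -> reported 23.0%
# sens=70%, fpr=5% -> reported 11.5%
# sens=40%, fpr=1% -> reported 4.9%
# The same 10% true rate can be reported as anywhere from ~5% to ~23%.
```

This is the same identity that prevalence‑correction estimators (e.g., Rogan–Gladen) invert; without measured error rates for the judge model, readers have no way to correct, or even bound, the reported shares.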