Trying to teach in the age of the AI homework machine

AI, Homework, and Assessment

  • Many see graded take‑home work as untenable: LLMs can do most essays and coding assignments, making homework a poor proxy for understanding.
  • Common proposed fix: shift weight to in‑person, proctored assessment—handwritten exams, lab tests on air‑gapped machines, oral exams, in‑class essays, code walk‑throughs, and project defenses.
  • Objections: this is more expensive, harder to scale, and clashes with administrative pushes for “remote‑friendly” uniform courses and online enrollment revenue.
  • Some instructors respond by scaling up assignment scope on the assumption that students will use AI; critics say this effectively punishes honest students and those unable or unwilling to use it.

Is AI Use Cheating or a Job Skill?

  • One camp treats AI as a natural tool like a calculator or IDE: let it handle boilerplate, glue code, proofreading, and use freed time for higher‑level skills.
  • Others argue that if AI is required to keep up, non‑users are disadvantaged, and students can pass without building foundational competence.
  • Suggested middle ground: allow AI for practice and exploration but verify mastery in AI‑free settings; or use AI as a “coach” (e.g., critique a student’s handwritten draft) rather than a ghostwriter.

Re‑thinking Homework and Grading

  • Many commenters say homework should be mostly ungraded or low‑weight, serving as practice plus feedback rather than evaluation.
  • Others note that graded homework exists largely to coerce practice; when AI completes it for them, students still fail the exams but expect make‑ups.
  • Variants proposed: nonlinear grading (final = max(exam, blended exam+HW)), frequent low‑stakes quizzes, large non‑AI‑solvable projects, or flipped classrooms where practice happens in class and “lectures” happen at home.
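The nonlinear grading idea above can be sketched in a few lines. This is a minimal illustration, not any instructor's actual rubric: the 30% homework weight and the function name `final_grade` are hypothetical choices for the example.

```python
def final_grade(exam: float, homework: float, hw_weight: float = 0.3) -> float:
    """Nonlinear grading: final = max(exam alone, blended exam+HW).

    Homework can only raise the final grade, never lower it, so there is
    no incentive to outsource it to AI just to protect one's average.
    The 0.3 homework weight is an arbitrary illustrative value.
    """
    blended = (1 - hw_weight) * exam + hw_weight * homework
    return max(exam, blended)

# A strong exam with no homework credit is not penalized:
#   final_grade(90, 0) -> 90
# Homework can lift a weaker exam score (here to roughly 79):
#   final_grade(70, 100)
```

The design point is that the `max` removes the coercive role of graded homework: students who demonstrate mastery on the proctored exam lose nothing by skipping it, while those who practice honestly can still be rewarded.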

Value of Degrees and Institutional Incentives

  • Some predict widespread AI cheating will make many degrees indistinguishable from degree‑mill credentials; others think “known‑rigorous” institutions that lean on in‑person testing will become more valuable.
  • Multiple threads blame the consumer/for‑profit model: funding tied to graduation counts, online enrollment as a “cash cow,” grade inflation, and admin‑driven constraints (e.g., banning in‑person exams for “fairness” to online sections).
  • Several teachers report AI has exposed pre‑existing problems: weak motivation, cheating cultures, and an overemphasis on grades and credentials over actual learning.

AI as a Tutor vs. AI as a Crutch

  • Individually, many describe LLMs as transformative for self‑study (math, CS, Rust, etc.), especially for motivated learners and adults without access to good teaching.
  • The tension: AI can be an extraordinary personal tutor, but in credential‑driven systems students are heavily incentivized to use it as a shortcut, hollowing out the meaning of coursework unless assessment is redesigned.