Inside the university AI cheating crisis
Assessment formats and AI use
- Many courses rely mostly on papers, projects, and presentations rather than proctored exams; some universities reduced or removed exams during Covid and never restored them.
- Others report traditional models with midterms, finals, in‑class essays, language interviews, and problem‑solving exams still dominant.
- Proposed countermeasures: handwritten in‑class essays, paper‑and‑pencil or air‑gapped lab exams, oral exams/interviews, orals at scale using AI assistance, and weighting exams more heavily to offset the ease of cheating on homework.
- Major constraint: time and labor for oral or closely proctored assessment, especially with large classes and limited TA support.
What counts as “cheating” with AI
- Described spectrum of uses: brainstorming topics, outlining, polishing prose, full drafting, paraphrasing tools, grammar checking, and using AI to explain readings.
- Humanities educators in the thread tend to see AI‑assisted writing as cheating; some science/technical educators are more open if the ideas and analysis are original.
- Participants highlight a large unresolved gray area and call for clearer definitions (e.g., AI‑generated then edited vs. human‑written then AI‑edited).
- One suggestion: require students to submit prompts as part of grading to expose how AI was used.
Detection tools and their limits
- Turnitin plagiarism detection is variously described as:
  - Expensive, with many false positives and reliance on crude similarity metrics.
  - Still useful for catching blatant copying and paraphrasing.
- AI detection is widely viewed as unreliable “snake oil,” with concerns about:
  - High false‑positive rates (including for non‑native English speakers).
  - Lack of independent validation of accuracy.
  - Arms‑race dynamics as prompts and styles change.
- Newer tools that record the writing process (keystrokes, edits) are described; they may work for now, but they raise evasion and privacy/FERPA concerns and face adoption barriers (apathy, red tape, cynicism).
Learning, incentives, and ethics
- Some students use AI to save time or clarify material; others may bypass learning entirely.
- Debate over the calculator analogy: some see baseline skills as essential to later understanding and to critiquing AI output; others argue much hand‑work is unnecessary busywork.
- Several comments criticize higher ed for emphasizing credentials, curves, and high‑stakes grading, making AI a rational way to “game” a zero‑sum system.
- Others stress personal integrity and long‑term self‑harm from cheating, while noting broader cultural distrust of institutions and role models who succeed via dishonesty.
Future of essays and assignments
- Some argue that if AI can do an assignment well, the assignment design is obsolete; they call for moving away from formulaic essays toward presentations, more authentic tasks, or other forms of communication.
- Others defend essays as a core way to develop thinking and writing, noting essay‑like writing is common outside academia (editorials, blogs, long posts).