Giving university exams in the age of chatbots

Course and Exam Setup

  • The course (Open Source Strategies) emphasizes collaboration; even during exams, students may discuss on-topic questions with each other.
  • The professor allowed LLMs, but made students explicitly accountable for their use: chatbot-originated mistakes were penalized more heavily than “honest” human misunderstandings.
  • Some see this as ingenious: with a powerful tool available, failing to vet or understand its output signals weaker mastery than an unaided mistake would.
  • Others argue this biases students against LLMs relative to web search, since similar double standards aren’t applied to information from websites.

Student Use and Trust of LLMs

  • Very few students actually used LLMs in the described exam; some used them well, others poorly (e.g., submitting walls of text or pasted chatbot prose they clearly misunderstood).
  • Commenters debate how generalizable this is: in some institutions, LLM dependency is said to be “exploding,” especially among younger cohorts.
  • Several predict growing dependence on these tools, worrying that future generations may become unable to work without AI assistance.

Memorization, Understanding, and Exam Design

  • A large subthread contrasts “traditional” closed-book, handwritten, often oral exams versus tool-enabled, open-book/LLM exams.
  • One camp advocates going back to strict, in-person, device-free, heavily memorization-based exams (sometimes with oral components), claiming memorization underpins creativity and expertise.
  • Others counter that this privileges rote memory and performance under pressure, penalizing students with anxiety or different cognitive styles; they favor projects, portfolios, and open-book exams that test synthesis and reasoning.

Cheating, Collaboration, and Academic Culture

  • The professor was surprised that students feared even discussing past exam questions; in some academic systems such collaboration was once normal, even encouraged.
  • Many describe harsh, zero-tolerance cheating regimes, pressure from fee-paying models, and widespread plagiarism (sometimes tolerated, sometimes punished).
  • Some argue AI mainly amplifies existing incentives: if education is a “degree factory,” students will use LLMs to just pass; if the culture values deep learning, students use them more critically.

Fairness, Access, and Future of AI in Education

  • Concerns include over-reliance on proprietary LLMs that may become expensive or restricted, balanced against optimism that cheap or locally runnable models will become widely available.
  • There is debate whether students should be trained as independent thinkers first and LLM users second, or treated from the outset as workers who will always have AI tools.