AI makes the humanities more important, but also weirder

AI and Academic Assessment

  • Many see LLMs as blowing a “gaping hole” in current education, which has treated unsupervised written work as evidence of learning; AI can now produce that work.
  • Suggested responses: in-class or oral exams, closed-book tests, podcast or project-based work, oral defenses, and “German-style” systems in which hard problem sets gate access to high‑stakes exams.
  • Others note big practical barriers: heavy teaching loads (e.g., 4–5 courses/term), slow iteration (1–2 tries/year), lack of institutional support, and students who flounder with self-directed or “design your own pedagogy” models.
  • Some argue AI use should simply be allowed and the bar raised, since AI output is only as good as its user; others propose device bans or proctored test centers, but enforcement outside formal exams is seen as unrealistic.

Accessibility, Fairness, and Assessment Design

  • “AI-proof” or multimodal assignments (e.g., recognizing island outlines) raise disability concerns, especially for blind or visually impaired students.
  • Debate splits between two positions:
    • “Different people can get different assignments; that’s fine.”
    • Separate tracks stigmatize students and are poorly maintained; assignments should be designed inclusively from the outset.
  • Proposals include multifaceted tasks (essay, podcast, video, comic, etc.) focused on core learning goals, but critics note the difficulty of keeping alternatives equivalent and objectively gradable.

Humanities, History, and the Value Question

  • Several commenters agree AI forces educators to revisit “What does it mean to learn?” and “What are the humanities for?” beyond credentialing.
  • Disagreement over history’s purpose:
    • One camp: primarily to understand human stories, complexity, and perspectives, not to predict the future.
    • Another: history should be used more as strategic analysis (e.g., studying losers, failures, instability).
  • Some argue the humanities are already treated as credential mills and “history appreciation” rather than deep engagement; AI may amplify shallow, AI-written essays unless teaching shifts toward discussion, recitation, and Socratic methods.

AI as Tool: Coding, Translation, and Research

  • One view: commoditized coding will empower humanists, who can now build tools, analyze texts, or visualize data with AI help.
  • Skeptics warn about hallucinated libraries, citations, and black‑box fragility; AI helps those who already understand software but can mislead novices.
  • Strong disagreement on AI’s translation quality: some say modern transformer-based translation systems or specialized tools outperform generic LLMs; others claim generic LLMs still hallucinate and silently distort meaning, which is dangerous for serious scholarship.
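The “analyze texts” point above is concrete in practice: the tools humanists can now assemble with AI help are often only a few lines of ordinary code. A minimal, illustrative sketch (the sample passage and tokenizer are assumptions, not taken from the discussion) of the simplest such tool, a word-frequency count:

```python
# A minimal sketch of the kind of text-analysis tool a humanist might
# build with AI assistance: counting word frequencies in a passage.
import re
from collections import Counter

def word_frequencies(text, top_n=3):
    """Return the top_n most common lowercase word tokens in text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(tokens).most_common(top_n)

# Illustrative sample passage (hypothetical, chosen for this sketch).
passage = (
    "History is not the study of the past alone; "
    "history is the study of how the past is told."
)
print(word_frequencies(passage))
# → [('the', 4), ('is', 3), ('history', 2)]
```

The skeptics’ caveat applies directly to code like this: a user who already understands tokenization will notice that this regex drops hyphenated words and non-Latin scripts, while a novice may not, which is exactly the “helps experts, misleads beginners” dynamic described above.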

Broader Systemic and Cultural Concerns

  • Many see AI cheating as symptom, not cause, of an education system optimized for grades, credentials, and social sorting rather than learning.
  • Discussion touches on collective-action problems (“everyone else will cheat”), economic incentives, and hollowing out of mid‑skill jobs.
  • Some worry LLMs will normalize “vibes over truth,” erode notions of objectivity, and even reshape how the next generation writes, thinks, and speaks.