The cultural divide between mathematics and AI

What mathematicians value vs. what AI optimizes for

  • Many comments echo the article’s point: mathematicians care primarily about why a theorem is true, not just whether it is.
  • AI and much of ML research are seen as oriented toward “what works” (benchmarks, products, novelty) rather than deep conceptual understanding.
  • Some note that this isn’t unique to AI but reflects a broader “engineering / business” mindset: optimize, ship, and monetize.

Proof, understanding, and computer/AI-generated results

  • The Four Color Theorem and Kepler conjecture are used as examples: computer-heavy proofs settled truth but left many unsatisfied about underlying structure.
  • Debate: is “there exists a finite unavoidable reducible set of configurations” already a genuine why, or just a restatement with no real insight?
  • Several argue that proofs too long or opaque for humans to grasp have limited mathematical value: they fail to generalize, to inspire new techniques, or to clarify which assumptions actually matter.
  • Others respond that long, ugly, or “incomprehensible” proofs still have use as tools, and that understanding can come later by analyzing the proof or its consequences.

AI as tool, collaborator, or replacement

  • Optimistic view: AI can handle tedious but non-trivial “busywork” (extensions of inequalities, error bounds, formalization in Lean), freeing humans for big-picture ideas.
  • Some envision AI-guided “recreational” or hobbyist-level research and powerful personal tutors that compress months of reading into days.
  • Pessimistic view: if AI eventually produces both formal proofs and beautiful explanations, human research may be economically displaced and current mathematical communities may shrink or lose their role.
  • Analogies to CNC vs. artisanal woodworking: tools expand capability but also change who gets to be a professional and how large the human community remains.
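To make the "formalization busywork" point concrete, here is a minimal sketch of the kind of routine lemma such tools might discharge, assuming Lean 4 with Mathlib (the theorem name is illustrative, not from the discussion):

```lean
import Mathlib

-- A toy instance of routine formalization work: a two-variable
-- AM–GM-style inequality, closed by the `nlinarith` tactic once
-- we supply the key fact that (a - b)^2 is nonnegative.
theorem am_gm_two (a b : ℝ) : a * b ≤ (a ^ 2 + b ^ 2) / 2 := by
  nlinarith [sq_nonneg (a - b)]
```

Proofs like this are mechanical once the right auxiliary fact is found, which is precisely the step commenters imagine AI handling so humans can focus on the big-picture ideas.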

Openness, secrecy, and “AI-washing”

  • Strong discomfort with increasing secrecy in industrial AI labs, contrasted with mathematics’ tradition of open sharing and alphabetical authorship.
  • Some frame the divide as economic: AI is sliding from academic research into proprietary engineering, and conference organizers and speakers feel pressured to bolt on AI themes to attract funding and attention.

Interpretability and rigor gaps in AI

  • Frustration that many ML papers contain “mathiness”: dense but wrong, irrelevant, or uncheckable mathematics.
  • Calls for more focus on understanding models (mechanistic interpretability) rather than just scaling, though others stress how difficult this is in practice.