The Math Is Haunted
Lean tactics, readability, and UX
- Several commenters find Lean tactics (e.g. rfl, rw) overloaded, opaque, and dependent on distant context, making proofs feel “fuzzy” compared to low-level languages.
- Others note you can inspect tactic definitions, use pattern-matching / term-style proofs instead of tactics, and rely on indentation conventions to show proof structure, but acknowledge a learning cliff.
- There’s active work on a more explicit rw interface (position parameters, GUI widgets) and better error messages, plus external visualization tools like Paperproof and Lean→LaTeX exports.
- Some worry that reading proofs without interactive tooling is hard; others say they scan for structure and lemmas rather than line-by-line states.
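To make the tactic-vs-term contrast concrete, a minimal Lean 4 sketch (the examples and lemma names here are illustrative, taken from core Lean, not from the thread):

```lean
-- rfl closes goals that hold by definitional computation alone.
example : 2 + 2 = 4 := rfl

-- A tactic proof: rw rewrites the goal with a named lemma, so reading it
-- requires knowing what the goal looks like at each step.
example (a b : Nat) : a + b = b + a := by
  rw [Nat.add_comm]

-- The same fact as a term-style proof: the proof term is spelled out
-- directly, with no hidden tactic state to reconstruct.
example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```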
Comparisons: Lean, Coq/Rocq, Agda, Metamath, Mizar
- Agda is preferred by some for direct proof terms and strong dependent pattern matching; Lean is seen as a middle ground, Coq/Rocq as more annotation-heavy.
- Metamath and Metamath Zero are praised for minimal cores but require memorizing many lemmas; Lean’s tactic layer trades simplicity for usability.
- Mizar is mentioned as an older system with a large library; Lean’s mathlib is perceived as having more momentum currently.
Kernel, tactics, “sorry”, and verification
- Tactics are user-level; only the small kernel must be trusted. This can cause proofs to break across versions when tactic behavior changes (e.g. simp), so people pin down the lemma lists (simp only [...]).
- Lean produces binary “olean” objects containing fully elaborated terms that can be independently rechecked. Commands like #print axioms and extra tools (e.g. SafeVerify) help ensure no hidden axioms or sorry are used.
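A small sketch of both points (Lean 4; the theorem itself is a placeholder):

```lean
-- Pinning simp to an explicit lemma list keeps the proof stable even if
-- the default simp set changes between versions.
theorem pinned (n : Nat) : n + 0 = n := by
  simp only [Nat.add_zero]

-- Asks the kernel which axioms the proof depends on; a hidden `sorry`
-- would show up here as the `sorryAx` axiom.
#print axioms pinned
```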
Formal statement vs intended meaning
- Multiple comments stress that proof assistants only guarantee “if this formal statement is what you meant, the proof is correct”; they don’t ensure the statement matches human intent.
- This is likened to the spec/implementation gap in software; there’s skepticism that any purely technical solution exists, though LLMs and computational semantics are floated as partial aids.
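A classic toy illustration of the gap (a hypothetical example, not one from the thread): the proof below is fully kernel-checked, yet the statement is vacuous, because no natural number satisfies the hypothesis.

```lean
-- Looks like a theorem "about" n, but h : n < 0 can never hold for a Nat,
-- so the statement says nothing about any actual number.
theorem vacuously_true (n : Nat) (h : n < 0) : n = 42 :=
  absurd h (Nat.not_lt_zero n)
```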
Applications beyond pure math
- People speculate about Lean‑like systems for fact-checking news, mapping argument trees, or Bayesian reasoning; others respond that natural language ambiguity, rhetoric, and evidential reasoning differ fundamentally from mathematical proof.
- Various argument‑mapping and debate tools (Kialo, Argdown, Claimify, older systems) are cited as lighter-weight attempts in this direction.
Foundations, axioms, and creativity
- Discussion touches on Euclid, ZFC, controversial axioms (e.g. choice), and the fact that axioms need not be “cosmically true,” just assumed within a theory.
- Some fear formal systems might constrain mathematical creativity; others counter that Lean lets you add arbitrary axioms and explore consequences, including intentionally “haunted” inconsistent ones.
- There’s brief mention of localized inconsistency and paraconsistent logics as a research topic.
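The “haunted” idea can be sketched in a few lines of Lean 4 (illustrative names):

```lean
-- A deliberately inconsistent axiom.
axiom haunted : 2 + 2 = 5

-- From it, False (and hence anything) follows; the kernel still accepts
-- the proof, and #print axioms exposes the dependence on `haunted`.
theorem ghost : False :=
  absurd haunted (by decide)

#print axioms ghost
```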
Gamification and learning
- Several proof‑puzzle games and teaching resources (Natural Number Game, other proof games) are shared; some enjoy the “logic game” feel, while others find early Lean proofs opaque or its terminology (e.g. “goals”) confusingly named.
- Opinions differ on whether Lean is currently good for learning advanced math; some say pencil-and-paper is still better, though formalization projects (e.g. analysis companions) are used for study.
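A Natural Number Game-style level, as a sketch: the appeal is watching the goal transform after each tactic (lemma names from core Lean, used for illustration).

```lean
-- Goal starts as 0 + n = n; induction splits it into a base case and a
-- step case whose goal mentions the induction hypothesis ih.
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl                          -- goal: 0 + 0 = 0
  | succ k ih => rw [Nat.add_succ, ih]   -- goal: 0 + (k + 1) = k + 1
```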
AI and the future of formal methods
- Commenters see strong synergy between LLMs and proof assistants, but note current LLM performance in Lean/Coq is mediocre and their scripts are painful to debug.
- Research efforts (e.g. DeepSeek-Prover, SMT integration, tools from major labs, ongoing formalization of Fermat’s Last Theorem) are cited as promising, with a sense that AI may significantly reshape the landscape.
- For now, Lean and Coq/Rocq are viewed as the safest bets in terms of community, libraries, and real-world use; Idris and Agda are seen as smaller, more niche.