Peano arithmetic is enough, because Peano arithmetic encodes computation

Reaction to the post and Lisp bootstrapping

  • Many readers enjoyed the exposition and humor, especially the explicit Lisp bootstrapping inside Peano Arithmetic (PA) and the Goodstein discussion.
  • Several links are shared to tiny/self-bootstrapping Lisps (e.g., SectorLISP) and related systems (Forths, ZenLisp, Boyer–Moore/NQTHM), reinforcing the idea that very small cores can encode rich computation.
  • Some praise Lisp and Forth as especially natural for “bootstrapping from nothing,” with Lisp often perceived as easier to reason about.
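To make the "very small cores" point concrete, here is a minimal sketch of a Lisp-style evaluator; this is illustrative Python of my own (names `evaluate`, `GLOBAL` are mine), not the design of SectorLISP or any linked system. Four special forms plus a handful of primitives already give a usable functional language:

```python
# Minimal Lisp-style evaluator: four special forms plus a few primitives.
# Expressions are nested Python lists; strings act as symbols.

def evaluate(expr, env):
    if isinstance(expr, str):          # symbols are variable references
        return env[expr]
    if not isinstance(expr, list):     # numbers etc. are self-evaluating
        return expr
    op = expr[0]
    if op == "quote":                  # (quote x) -> x, unevaluated
        return expr[1]
    if op == "if":                     # (if test then else)
        return evaluate(expr[2] if evaluate(expr[1], env) else expr[3], env)
    if op == "lambda":                 # (lambda (params) body), lexical scope
        params, body = expr[1], expr[2]
        return lambda *args: evaluate(body, {**env, **dict(zip(params, args))})
    fn = evaluate(op, env)             # application: evaluate head, then args
    return fn(*(evaluate(a, env) for a in expr[1:]))

GLOBAL = {
    "cons": lambda a, d: [a] + d,      # lists only, for simplicity
    "car":  lambda x: x[0],
    "cdr":  lambda x: x[1:],
    "eq":   lambda a, b: a == b,
}
```

For example, `evaluate([["lambda", ["x"], ["cons", "x", ["quote", []]]], 42], GLOBAL)` yields `[42]`.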

Goodstein sequences, PA strength, and consistency principles

  • A cluster of comments digs into why Goodstein’s theorem is unprovable in PA, even though, for each fixed n, PA proves that the Goodstein sequence starting at n terminates.
  • Discussion centers on:
    • PA + “Con(PA)” not being sufficient to prove Goodstein’s theorem.
    • The need for stronger principles like ω‑consistency or uniform reflection (“if PA proves φ, then φ is true” for all arithmetic φ).
    • How to formalize ω‑consistency as an arithmetic sentence using Gödel numbering.
  • There is careful distinction between:
    • ∀n Provable(G(n)) versus Provable(∀n G(n)), and why one cannot in general pass from the former to the latter (i.e., move “Provable” outside a universal quantifier).
    • “PA proves X” vs “PA proves that PA proves X”; the latter can hold in nonstandard models without yielding a standard proof of X.
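The sequences under discussion are easy to compute, which makes the situation vivid: every run you can actually carry out terminates, yet the uniform claim escapes PA. A quick sketch in Python (function names `bump` and `goodstein` are mine):

```python
def bump(n, b):
    """Write n in hereditary base-b notation, then replace every b by b+1."""
    if n == 0:
        return 0
    result, e = 0, 0
    while n > 0:                       # peel off base-b digits c * b**e
        n, c = divmod(n, b)
        if c:
            result += c * (b + 1) ** bump(e, b)   # recurse into the exponent
        e += 1
    return result

def goodstein(m, steps):
    """First `steps` terms of the Goodstein sequence starting at m
    (stops early once the sequence hits 0)."""
    seq, b = [m], 2
    for _ in range(steps - 1):
        m = bump(m, b) - 1             # bump the base, then subtract 1
        seq.append(m)
        b += 1
        if m == 0:
            break
    return seq
```

`goodstein(3, 10)` already reaches 0 after a few steps; starting at 4 the sequence (4, 26, 41, …) grows for an astronomically long time before it, too, terminates.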

Nonstandard models and meta-theoretic vs internal reasoning

  • Several comments emphasize that PA can’t internally distinguish standard from nonstandard naturals.
  • This is why PA cannot, in general, infer “if PA proves that PA proves S, then PA proves S”: the inner proof may be coded by a nonstandard number. Meta-theoretically (from outside), one can often argue such implications for standard instances.
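The obstruction can be stated compactly (notation mine: Proof_PA is the arithmetized proof relation, M a model of PA):

```latex
% "PA proves that PA proves S" can be true in a nonstandard model
% because the existential quantifier over proof codes may be witnessed
% only by a nonstandard number; no standard (real) proof need exist.
\[
  \mathcal{M} \models
    \exists p\, \operatorname{Proof}_{\mathrm{PA}}\!\bigl(p,\ \ulcorner S \urcorner\bigr)
  \quad\not\Longrightarrow\quad
  \mathrm{PA} \vdash S
  \qquad \text{(the witness $p$ may be nonstandard).}
\]
```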

Foundations and alternatives: PA, inductive types, lambda calculus

  • Some discuss whether natural numbers or inductive types are more “primitive,” noting that inductive types can be built from ℕ plus a few type formers, but that some infinite source (ℕ, W-types, inductives) is required.
  • Others note that pure lambda calculus and other minimalist systems (e.g., Boyer–Moore’s logic) also suffice to encode computation and build significant mathematics.
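The “pure lambda calculus suffices” point has a standard concrete form: Church numerals, where a number is just an iteration count and arithmetic is pure function manipulation. Shown here in Python for readability (a sketch of the classical encoding, not anything from the thread):

```python
# Church numerals: the numeral n means "apply f to x, n times".
# No built-in numbers are needed until we decode at the end.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mul  = lambda m: lambda n: lambda f: m(n(f))   # compose: n(f) applied m times

def to_int(n):
    """Decode a Church numeral by counting applications."""
    return n(lambda k: k + 1)(0)

two   = succ(succ(zero))
three = succ(two)
```

Here `to_int(add(two)(three))` gives 5 and `to_int(mul(two)(three))` gives 6, with every step reducible to function application alone.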

Reals, computability, and physics

  • A long subthread debates whether “computation is enough” given that:
    • Computable reals are only a subset of reals.
    • Physical measurement always has finite precision and may never access incomputable structure.
  • One side stresses that science successfully uses ZFC and full reals; another counters that for physical purposes discrete or computable approximations seem sufficient, and the continuum may be an idealization.
  • There is extended argument about:
    • Whether physical quantities must form Archimedean groups modeled by ℝ.
    • The impact (or irrelevance) of the Heisenberg uncertainty principle for measurement theory.
    • The practical role of numerical analysis: continuous models optimized first, then discretized.
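The “computable reals” in this debate have a concrete definition: a real x is computable if a program can produce a rational within 2⁻ⁿ of x for any n. A sketch of that representation (names and the bisection choice are mine):

```python
from fractions import Fraction

# A computable real is represented as a function n -> rational q with
# |q - x| <= 2**-n. Only countably many reals admit such a program,
# so the computable reals are a strict subset of R.

def sqrt2(n):
    """Approximate sqrt(2) to within 2**-n by bisection on [1, 2]."""
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > Fraction(1, 2 ** n):
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2               # midpoint error <= (hi - lo) / 2

def add_reals(x, y):
    """Sum of two computable reals: ask each for one extra bit of
    precision so the combined error stays within 2**-n."""
    return lambda n: x(n + 1) + y(n + 1)
```

Arithmetic on such reals is computable, as `add_reals` illustrates; what fails is equality testing, which is undecidable in this representation.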

Speculative “discrete–continuum bridges” and LLM-written math

  • A highly abstract comment sketches a unification of discrete and continuous mathematics via Grothendieck topoi, Betti/aleph/beth numbers, Noetherian symmetries, etc.
  • Others critique this as mathematically incoherent or “buzzword salad,” especially the aleph/beth/Betti conflation; yet concede some high-level ideas (e.g., discrete sampling of continua) are reasonable.
  • It emerges that some of the linked material was largely LLM-generated, guided by a human with a brain injury who used the models as a “thought transcriber.”
  • This triggers a meta-discussion: current LLMs can manipulate mathematical language but still lack deep understanding; future systems might do better, but skepticism remains about present-day high-level math output.

Pedagogy, coincidences, and community/platform quirks

  • Brief side discussion on teaching set theory vs arithmetic first (echoes of 1960s “New Math”).
  • Several people remark on “coincidences” where topics they just researched appear on HN; others attribute this to recency illusion and common underlying causes, not tracking.
  • Tangent on ads, surveillance, and tools like AdNauseam appears early, then some argue against derailing every tech discussion into ad pessimism.
  • StackOverflow/HN dynamics are noted: SO’s reputation thresholds and deletion culture likely suppress upvotes on the original answer compared with HN attention.

Corrections, style, and minor points

  • Readers point out minor errors in the post (typos like “omega” vs “ω”, a missing parenthesis in a Lisp “not” example); the author acknowledges and fixes them.
  • Some comments discuss readability of heavily parenthesized Lisp vs indentation-based syntaxes like Python, and how proper formatting makes Lisp manageable.
  • A few links are shared for further reading on PA encodings, Kirby–Paris/Goodstein, PA consistency notions, and popular logic/complexity expositions.