AGI is Mathematically Impossible 2: When Entropy Returns

Disputed definitions of AGI and “intelligence”

  • Many say the paper never gives a precise, testable definition of AGI, falling back instead on vague “human-equivalent” descriptions.
  • Critics argue the author implicitly requires an omniscient, perfectly rational agent that never fails on any input; under that standard, “AGI is impossible” is trivially true and does not match what people usually mean by AGI: a human-like, fallible general problem-solver.

Computability, Church–Turing, and physics

  • Several commenters argue that any mathematical proof of AGI’s impossibility must either (a) show human cognition is non-algorithmic, or (b) overturn the Church–Turing thesis or the physical computability of the brain.
  • The paper is faulted for not engaging with this at all, and for using a notion of “algorithmic” that diverges from standard CS usage.
  • Counterpoint: some note the brain might exploit still-unknown physics or non-computable processes, but acknowledge there is currently no concrete evidence for this.

Entropy, heavy tails, and “semantic collapse”

  • The core IOpenER claim, that adding information in certain heavy-tailed semantic spaces makes entropy diverge, is seen by many as either a rephrasing of known limits (No Free Lunch, the halting problem, wicked problems, computational irreducibility) or as an overextension of Shannon entropy into domains where its mathematical grounding is unclear; a standard divergence construction is sketched after this list.
  • Some think the argument shows only that optimal decisions in unbounded, ill-defined spaces are impossible for any system, human or machine, not that practical AGI is impossible.
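
For readers tracking the entropy point, the following is a standard textbook construction, not taken from the paper or the thread, showing how a discrete distribution with sufficiently heavy tails has infinite Shannon entropy; whether real “semantic spaces” behave like this is precisely what critics dispute.

```latex
% A heavy-tailed probability mass function whose Shannon entropy diverges.
% Normalization: \sum_{n \ge 2} 1/(n (\ln n)^2) converges by the integral
% test, so the constant C below is finite and positive.
\[
  p_n \;=\; \frac{C}{n \,(\ln n)^2}, \qquad n \ge 2,
  \qquad C^{-1} \;=\; \sum_{n \ge 2} \frac{1}{n \,(\ln n)^2} \;<\; \infty .
\]
% Each entropy term behaves like C/(n \ln n) for large n:
\[
  -\,p_n \ln p_n
  \;=\; \frac{C}{n (\ln n)^2}\,\bigl(\ln n + 2 \ln \ln n - \ln C\bigr)
  \;\sim\; \frac{C}{n \ln n},
\]
% and since \sum_{n \ge 2} 1/(n \ln n) diverges, the total entropy
% H(p) = \sum_n (-p_n \ln p_n) is infinite, even though p itself is a
% perfectly well-defined distribution.
```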

Critique of illustrative examples

  • The “have I gained weight?” and Einstein–relativity examples draw heavy fire.
  • Multiple commenters say current LLMs already answer the weight question sensibly instead of looping as the paper predicts; the author is accused of imagining behavior rather than testing it.
  • Others note humans also use heuristics, time limits, and satisficing in exactly such “infinite” social and scientific spaces, so these examples do not distinguish humans from machines.

Humans vs machines: are we algorithmic?

  • One side holds that humans are biochemical machines obeying physics, hence in principle simulable by computation, so AGI is possible unless proven otherwise.
  • The other side (including the author) holds that humans can “frame-jump”, creating new symbol systems allegedly unreachable by any fixed formal system; this is taken as evidence of non-algorithmic cognition.
  • Critics respond that (a) the formal “frame” argument appears wrong or incomplete (e.g., a Turing machine over a fixed alphabet can simulate any larger symbol set; see the sketch after this list), and (b) even if exact algorithms are impossible, heuristic systems like humans can still count as AGI.
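
To make the critics’ point (a) concrete, here is a minimal Python sketch of the standard alphabet-reduction argument; the alphabet and names are invented for illustration and appear nowhere in the paper or thread. A machine fixed over {0, 1} can encode, manipulate, and recover any finite symbol system, however novel, via fixed-width binary blocks.

```python
import math

def make_codec(alphabet):
    """Build a fixed-width binary encoding for any finite symbol set.

    This is the classic alphabet-reduction trick: a machine over {0, 1}
    simulates one over k symbols by reading and writing
    ceil(log2(k))-bit blocks, so a "new symbol system" of any finite
    size stays within reach of a fixed binary Turing machine.
    """
    width = max(1, math.ceil(math.log2(len(alphabet))))
    enc = {s: format(i, f"0{width}b") for i, s in enumerate(alphabet)}
    dec = {code: s for s, code in enc.items()}
    return enc, dec, width

# A "new" symbol system invented after the binary machine was fixed:
alphabet = ["▲", "●", "■", "✦", "blank"]
enc, dec, width = make_codec(alphabet)

tape = ["▲", "✦", "●"]
binary_tape = "".join(enc[s] for s in tape)          # encode to bits
recovered = [dec[binary_tape[i:i + width]]           # decode back
             for i in range(0, len(binary_tape), width)]
assert recovered == tape
print(binary_tape, recovered)
```

Nothing here settles the frame-jumping debate; it only shows that alphabet size, by itself, is not the obstacle.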

Quality, methodology, and empirical counterpoints

  • Several commenters call the paper crankish, citing self-archiving, lack of peer review, unusual formatting, and heavy rhetoric.
  • Others defend discussing it but agree it lacks rigor and overuses mathematical language to dress up philosophical claims.
  • Empirically, people point out that current models already solve many nuanced tasks, and that Apple’s “illusion of thinking” study demonstrates practical limitations rather than any proof of impossibility.