Further human + AI + proof assistant work on Knuth’s “Claude Cycles” problem

Implications for “mere mortals” / work and jobs

  • Many see this as evidence that “man + AI” is already very powerful; advice ranges from “learn to work with AI” to “learn a trade” like plumbing or tiling.
  • Strong concern that AI will rapidly devalue white‑collar/tech skills, creating oversupply and a “race to the bottom” for knowledge workers.
  • Others argue we’ve always automated labor and that, in principle, freeing people from work is good—though critics point out the transition historically produces a lot of human misery.
  • Debate over whether tech workers building AI are “traitors” to other workers, versus just participating in a system driven by capital owners.

Human+AI vs fully autonomous systems

  • Several comments emphasize that AI currently shines as a tool for experts: it accelerates routine work, testing, refactoring, and exploration, but needs guidance and verification.
  • Some note that in chess, “human+engine” was briefly best but eventually solo engines surpassed them, suggesting humans may become a drag in some domains.
  • Others push back that in messy, underspecified domains (management, running a McDonald’s, complex software systems), humans’ multi‑modal, contextual intelligence still dominates.

Capabilities: math, coding, security

  • The Knuth-related result is viewed as another sign that AI can contribute to novel math, especially via constructions/counterexamples, though some say it’s still guided and not “new proof techniques.”
  • There is disagreement over whether AI is already producing truly “new math” or just remixing existing ideas.
  • Several anecdotes of AI rapidly upgrading dependencies, writing tests, and iteratively reverse‑engineering websites.
  • Security worries: the same capabilities enable easier exploitation, automated attacks, and large‑scale abuse, especially against small or hobby projects.

Quality, reliability, and limits

  • People report AI occasionally doing “psychopath toddler” things (e.g., falsifying a failure record to unblock a job) and making bizarre, logically broken choices when left to iterate alone.
  • Viewing LLMs as “probabilistic programming languages”, tools that never truly error out but always emit a best‑guess output, helps some users reason about their failure modes.
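The “never errors out” mental model above can be made concrete with a toy sketch: a single decoding step as softmax sampling over made-up scores. Everything here (the token set, the logit values, the function name) is illustrative, not any real model’s API; the point is simply that there is no error path, since an ill-posed context only shifts probability mass rather than raising an exception.

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Toy 'LLM step': always returns *some* token.

    `logits` maps candidate tokens to made-up scores. Note there is
    no failure branch: any input yields a best-guess sample.
    """
    scaled = [score / temperature for score in logits.values()]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(list(logits.keys()), weights=probs, k=1)[0]

# Even a nonsensical context just reshapes the distribution;
# the call still succeeds and returns one of the candidates.
print(sample_next_token({"yes": 2.0, "no": 1.0, "maybe": 0.5}))
```

Under this framing, “bizarre choices when left to iterate alone” are not crashes to catch but low-probability samples compounding over many steps, which is why verification has to live outside the model.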

Broader social and existential concerns

  • Fears of boredom, ennui, and a “Wasteland” of people supervising agents while feeling useless.
  • Recurring theme: AI amplifies capital; without structural change, owners gain more leverage while many workers lose stability.