The Singularity will occur on a Tuesday

Reactions to the piece

  • Many readers found it “delightfully unhinged”: a long, faux-rigorous build‑up to the punchline that the real curve is in human belief and behavior, not AI capability.
  • Others thought it read like “AI slop” or a ChatGPT/Claude session, citing clichés (“Here’s the thing nobody tells you…”, “Not a bug. The feature.”) and overconfident curve‑fitting as tells.
  • Several emphasized it’s satire or semi‑satire: the math is knowingly dodgy, and the point only lands if you read past the graphs.

Growth curves, modeling, and timelines

  • Multiple commenters object to fitting hyperbolas at all: they’re picked specifically because they blow up, not because the data demand them. For most of the series shown, a straight line or a sigmoid/logistic fits just as plausibly.
  • People point out physical and economic limits: compute, energy, fabrication, the need for real‑world experiments, and the historical pattern of S‑curves rather than true exponentials.
  • Others note that key metrics like MMLU and tokens‑per‑dollar look roughly linear; the only clearly “superlinear” thing is the volume of “emergent” papers and AI hype.
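The curve-fitting objection can be made concrete with a small numerical sketch. Over a short window far from its pole, a hyperbola y = 1/(t₀ − t) is nearly indistinguishable from a straight line, so choosing the hyperbola (and hence the blow-up date) is a modeling decision, not something the data forces. This is an illustrative toy, not the article’s actual fit; the pole location t₀ = 10 and the sampling window are arbitrary assumptions.

```python
# Toy illustration: hyperbolic "data" sampled far from the pole is
# well approximated by a straight line, so a linear fit is about as
# plausible as the hyperbola that was used to generate the points.
# t0 = 10 (pole location) and the window t in [0, 3] are assumptions.

t0 = 10.0
ts = [i * 0.1 for i in range(31)]        # t in [0, 3], far from the pole at t0
ys = [1.0 / (t0 - t) for t in ts]        # points drawn from a hyperbola

# Closed-form ordinary least squares for y ~ a*t + b
n = len(ts)
mt = sum(ts) / n
my = sum(ys) / n
a = sum((t - mt) * (y - my) for t, y in zip(ts, ys)) / sum((t - mt) ** 2 for t in ts)
b = my - a * mt

# Worst-case relative error of the linear approximation over the window
max_rel_err = max(abs((a * t + b) - y) / y for t, y in zip(ts, ys))
print(f"slope={a:.4f}, intercept={b:.4f}, max relative error={max_rel_err:.2%}")
```

Within this window the line tracks the hyperbola to within a few percent; only by extrapolating toward the assumed pole do the two models diverge, which is exactly the commenters’ point.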

What the “singularity” really is

  • A central thread accepts the article’s reframing: the important singularity is when humans can no longer make coherent collective decisions about machines, not when models hit some capability threshold.
  • Several argue we’re already partway there: institutions respond much more slowly than tech; companies race ahead of regulation; belief in inevitable AI progress drives behavior regardless of reality.
  • Others push back that this social “singularity” is just another bubble or millenarian narrative, structurally similar to religious apocalypses.

Capabilities and limits of LLMs

  • Long subthreads debate whether “next token prediction” fully explains LLM behavior. Some say we understand the mechanics (gradient descent, tensors); others stress that we don’t understand the learned internal algorithms or representations.
  • There’s disagreement on whether scaling LLMs alone can yield AGI or qualitatively new ideas, versus just ever‑better remixing of human knowledge.
  • Several note missing pieces: memory, continual learning, agentic structure, and the need for real‑world experimentation, especially for science.

Labor, economics, and power

  • Many see AI layoff narratives as anticipatory and PR‑driven: “we’re cutting because of AI” plays better than “we’re cutting for margin.”
  • Strong concern that AI will primarily depress wages, erode bargaining power, and widen inequality rather than liberate people from work.
  • Others argue the real problem is ownership and incentives, not “thinking machines” themselves: absent social reform, tech amplifies existing power structures.

Data, poisoning, and information dynamics

  • Some advocate “poisoning” web data to degrade future models; critics respond this mainly raises the cost of clean data and advantages large players.
  • A recurring theme is “epistemic takeover”: once enough elites believe a singularity is inevitable, their coordinated actions can make some version of it socially real, even if the underlying tech is just incrementally improving.