Yann LeCun, Pioneer of AI, Thinks Today's LLMs Are Nearly Obsolete
What “obsolete” might mean
- Some interpret “obsolete in 5 years” trivially (GPT‑4 replaced by GPT‑5), others as architectural replacement (e.g., JEPA‑style models superseding pure language modeling).
- A different view: current approach becomes obsolete economically, as scaling is roughly linear in cost while expectations (and VC funding) assumed superlinear returns.
LeCun’s critique of autoregressive LLMs
- Discussion centers on his claim that token‑by‑token generation is “System 1” (fast, reactive) with no real “System 2” reasoning.
- Others note there are non‑autoregressive models that still don’t show qualitatively better reasoning, so “one token at a time” may not be the real bottleneck.
- JEPA/V‑JEPA is cited as his long‑term bet, though commenters note there’s little yet to show versus state‑of‑the‑art LLMs.
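The "System 1" critique above targets the generation mechanism itself: each token is sampled from a conditional distribution given the tokens so far, with no explicit planning step. A minimal sketch of that loop, using a hand-written toy bigram table as a stand-in for a real model (all tokens and probabilities are hypothetical, purely for illustration):

```python
import random

# Toy autoregressive "language model": a hypothetical bigram table
# mapping a token to a distribution over the next token.
BIGRAMS = {
    "<s>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.5},
    "a":   {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "</s>": 0.3},
    "dog": {"sat": 0.7, "</s>": 0.3},
    "sat": {"</s>": 1.0},
}

def sample_next(token, rng):
    """Sample one token from the conditional distribution P(next | token)."""
    r, acc = rng.random(), 0.0
    for tok, p in BIGRAMS[token].items():
        acc += p
        if r < acc:
            return tok
    return tok  # fallback for floating-point rounding

def generate(rng, max_len=10):
    """Token-by-token generation: each step is a single reactive draw,
    conditioned only on the prefix, with no search over whole sequences."""
    out, tok = [], "<s>"
    for _ in range(max_len):
        tok = sample_next(tok, rng)
        if tok == "</s>":
            break
        out.append(tok)
    return out

print(generate(random.Random(0)))
```

A real LLM replaces the lookup table with a neural network over a long context, but the greedy, one-token-at-a-time control flow is the same, which is what the "System 1 only" argument points at.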
Math, logic, and modeling reasoning
- One camp argues no form of mathematics can fully model conceptual reasoning; math is just one tool of thought.
- Opponents insist reasoning is ultimately formalizable; alternative logics (e.g., paraconsistent logic) and probabilistic models could capture messy human preferences and inference.
- There’s back‑and‑forth over whether LLM behavior is “just probabilities” or something richer happening in latent representations.
Pattern matching as (or vs) intelligence
- Several argue that intelligence is largely sophisticated pattern‑matching: abstraction, compression, and recombination of patterns; logic is comparatively easy.
- Others push back: questions like dark matter, cancer cures, or world peace seem to demand more than pattern matching over existing data.
- Creativity is debated: is it just applying familiar patterns in new domains, or something fundamentally beyond interpolation? No consensus.
Real‑world value of LLMs
- One side: outside text generation and search, LLMs haven’t delivered major value; software quality and development velocity don’t appear visibly transformed.
- Counterpoint: text and search already underpin trillions in economic activity; coding assistance, prototyping, and problem exploration are concrete productivity gains for many.
- Some see LLM interaction as “just search,” others liken that objection to nitpicking whether airplanes truly “fly.”
LeCun’s track record and lab context
- A long subthread criticizes him for moving the goalposts: each time LLMs achieve something he previously claimed they couldn’t, he redefines what “matters” rather than revising his core beliefs.
- Others defend updating views as normal and argue his core claim—LLMs alone won’t reach AGI—hasn’t been falsified.
- There’s meta‑skepticism toward all public predictions (both hype and dismissal) and calls to focus on actual research (e.g., JEPA) rather than personalities.