Yann LeCun raises $1B to build AI that understands the physical world
World models vs. LLMs
- Many commenters welcome a major, well‑funded push on “world models” as a needed alternative to language‑centric AI.
- Pro‑world‑model arguments:
  - Text is an after‑the‑fact, compressed residue of reality; models should learn directly from high‑dimensional spatiotemporal data and causality.
  - Physical competence (e.g., a cat’s navigation and manipulation) is still far beyond current models; this is framed as the real bottleneck to AGI.
  - World‑model approaches (e.g., JEPA‑style predictive representations) aim to model the underlying dynamics, not just reproduce pixels/tokens.
- Skeptics argue that multimodal transformers can already, in principle, act as world models, since “tokens can represent anything,” and that “world models” is more branding than substance.
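The JEPA-style idea mentioned above can be sketched in a few lines: rather than reconstructing raw pixels or tokens, the model predicts the *embedding* of a future observation from the embedding of the current one, and the loss lives in that abstract space. Everything below (the linear encoder/predictor, the names) is an illustrative toy of this general principle, not any published architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear stand-ins for the learned encoder and predictor networks
# of a joint-embedding predictive model (names are illustrative only).
W_enc = rng.normal(size=(4, 8))   # maps 8-dim observations to 4-dim embeddings
W_pred = rng.normal(size=(4, 4))  # predicts the next embedding from the current one

def encode(x):
    return W_enc @ x

def predict(z):
    return W_pred @ z

x_t = rng.normal(size=8)     # observation at time t
x_next = rng.normal(size=8)  # observation at time t+1

# Key point: the loss compares embeddings, not raw observations --
# the model is asked to capture dynamics in representation space.
z_pred = predict(encode(x_t))
z_target = encode(x_next)    # treated as a fixed target (stop-gradient in practice)
loss = float(np.mean((z_pred - z_target) ** 2))
```

The contrast with generative models is the target: a pixel- or token-level model would be penalized for failing to reproduce every detail of `x_next`, whereas here only the predictable, abstract structure matters.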
Limits and strengths of current LLMs
- Critics say autoregressive LLMs:
  - Are structurally tied to copying and remixing training data, not genuine novel discovery.
  - Lack grounding, so hallucinations are inevitable.
  - Struggle with robust causal reasoning, out‑of‑distribution generalization, and physical interaction.
- Others counter:
  - RL‑enhanced models now solve substantial tasks, write large codebases, recover from their own errors, and act as capable agents.
  - Human reasoning is also largely pattern‑based; the gap may be smaller than critics admit.
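The “copying and remixing” critique can be made concrete with a deliberately crippled autoregressive model: a bigram sampler that can only emit word transitions it saw in training. Real LLMs generalize far beyond this toy, but the structural point the critics lean on, next‑token prediction from prior context, is the same. The corpus and helper names here are hypothetical illustrations:

```python
import random
from collections import defaultdict

# Tiny training corpus; the model can only recombine transitions seen here.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Bigram table: word -> list of words observed to follow it.
nxt = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    nxt[a].append(b)

def generate(start, n, seed=0):
    """Autoregressively sample n continuation tokens from the bigram table."""
    random.seed(seed)
    out = [start]
    for _ in range(n):
        candidates = nxt.get(out[-1])
        if not candidates:  # dead end: no observed continuation
            break
        out.append(random.choice(candidates))
    return out

sample = generate("the", 5)
# By construction, every adjacent pair in `sample` occurred in the corpus.
```

The counter-argument summarized above is that scale, multimodality, and RL push real models well past this remix regime, and that human reasoning itself leans heavily on learned patterns.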
Learning, architecture, and “AGI path” debates
- Some see the key bottleneck as continual/online learning and new learning rules beyond backprop; others emphasize richer environments and data over new architectures.
- There is disagreement on whether architectural shifts (like JEPA/world models) are essential, or whether scale + better data + RL on existing transformers can reach “AGI.”
- Several note biological brains are far more data‑ and energy‑efficient than current models, implying today’s approach is fundamentally wasteful.
Economic, social, and geopolitical angles
- Opinions split on whether advanced AI will broadly benefit humanity or mostly empower existing elites and justify layoffs.
- The huge seed round is seen as:
  - Positive for Europe (and also Singapore/Canada) in attracting frontier AI labs and talent.
  - Yet also an example of capital concentration that could have funded many smaller startups.
- Some argue Europe needs such labs to stay competitive; others point out US and Chinese funding and talent still dominate.
Prospects and skepticism about the new lab
- Enthusiasts see this as a chance to recreate a Bell‑Labs‑style, blue‑sky research culture and push beyond LLM plateaus.
- Skeptics question:
  - Why similar ideas did not already yield breakthrough products when pursued with far greater resources inside a big tech company.
  - Whether world‑model approaches remain mostly theoretical while LLMs are already delivering value.
- Many conclude that even if the specific bet fails, diversifying AI research beyond LLMs is beneficial.