Chomsky on what ChatGPT is good for (2023)
Context and Meta-Discussion
- The interview is from 2023; several commenters note how fast LLMs have advanced since then, while others raise Chomsky’s age and health, questioning how much weight to give his most recent remarks.
- Some find his prose increasingly hard to follow; others say he’s still unusually clear and precise compared to most academics.
Chomsky’s Main Position (As Interpreted)
- He distinguishes engineering (building useful systems) from science (understanding mechanisms).
- In his view, LLMs are engineering successes that nonetheless tell us little about human language or cognition.
- Commenters read his navigation analogies (airline pilots vs insect navigation; GPS vs Polynesian wayfinding) as making one point: good performance does not equal scientific understanding of the biological system.
Understanding, Language, and “Imitation”
- One camp agrees with Chomsky that LLMs mostly imitate surface statistics, lack “understanding,” and are poor models of human cognition. They point to hallucinations, data requirements vastly exceeding what toddlers need, and fragility outside training domains.
- Another camp questions what “understanding” even is, arguing that if a system consistently predicts, explains, and generalizes, the distinction between imitation and understanding becomes fuzzy or goal-dependent.
- Several note that humans also operate via pattern-following and compressed internal models; some suggest our own sense of understanding may be an illusion.
Universal Grammar, “Impossible Languages,” and Linguistics
- Chomsky’s long-standing program: humans have an innate language faculty that can acquire only a restricted class of “possible” (hierarchical) languages, while artificially constructed “linear” languages, whose rules count word positions rather than track structure, are easy for machines but hard for humans (see the first sketch after this list).
- Supporters argue this shows LLMs are not good scientific models of human language acquisition, even if they are powerful tools.
- Critics respond that:
  - LLMs clearly internalize rich syntactic structure (attention heads tracking parse trees, typological clustering, etc.; see the second sketch after this list).
  - Some recent work claims LLMs don’t learn “impossible” languages as easily as natural ones, though this is contested.
  - The empirical success of purely data-driven models weakens the case that a hardwired universal grammar is necessary, or at least shifts the burden of proof.
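A minimal sketch in Python of the hierarchical-vs-linear distinction, under stated assumptions: the rules, function names, and example sentence are invented for illustration and not taken from any specific study. The natural-style rule places a negation marker relative to syntactic structure (before the main verb); the “impossible” rule places it by counting words, a device no attested human language uses.

```python
# Toy illustration of a hierarchical (natural-style) rule versus a
# linear (counting-based, "impossible") rule. All names and the
# example sentence are hypothetical, for illustration only.

def negate_hierarchical(words, main_verb_index):
    """Structure-sensitive rule: insert 'not' before the main verb,
    wherever that verb happens to sit in the linear string."""
    return words[:main_verb_index] + ["not"] + words[main_verb_index:]

def negate_linear(words, position=3):
    """Position-counting rule: insert 'not' after a fixed number of
    words, ignoring syntax entirely."""
    return words[:position] + ["not"] + words[position:]

sentence = "the dog that barks chases cats".split()
print(negate_hierarchical(sentence, 4))
# ['the', 'dog', 'that', 'barks', 'not', 'chases', 'cats']
print(negate_linear(sentence))
# ['the', 'dog', 'that', 'not', 'barks', 'chases', 'cats']
# (the marker lands inside the relative clause, negating the wrong predicate)
```

The structure-sensitive rule keeps working however long the subject grows; the counting rule drifts arbitrarily far from the predicate it should modify, which is the intuition behind calling such languages easy for machines but hard for humans.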
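And a minimal sketch of the kind of probing evidence behind the “attention heads tracking parse trees” claim, in the spirit of published attention-analysis work: check whether any single head tends to attend from a dependent word to its syntactic head. The model checkpoint is real, but the hand-annotated gold edges and single-sentence evaluation are simplifying assumptions; actual studies score parsed corpora.

```python
# Sketch: does any BERT attention head track dependency edges?
# Requires `pip install torch transformers`. Gold edges below are
# hand-annotated for one toy sentence (a stand-in for a parsed corpus).
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

sentence = "the dog chased the cat"
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    attentions = model(**inputs).attentions  # per layer: (1, heads, seq, seq)

# Token positions: [CLS]=0 the=1 dog=2 chased=3 the=4 cat=5 [SEP]=6
# Hand-annotated (dependent -> head) dependency edges:
gold_edges = [(1, 2), (2, 3), (4, 5), (5, 3)]

best_score, best_head = 0.0, None
for layer, att in enumerate(attentions):
    for head in range(att.shape[1]):
        weights = att[0, head]  # (seq, seq) attention matrix
        hits = sum(weights[dep].argmax().item() == hd for dep, hd in gold_edges)
        score = hits / len(gold_edges)
        if score > best_score:
            best_score, best_head = score, (layer, head)

print(f"best head (layer, head) = {best_head}, "
      f"matches {best_score:.0%} of gold edges")
```

A head that scores highly here is “matching the parse tree” in the sense critics mean; whether that constitutes syntactic knowledge or a statistical shadow of it is exactly the disputed question.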
Reasoning, Capability, and Limits
- Debate over whether LLMs “reason” or merely approximate it:
  - Examples are given of correct numerical and physical reasoning; others counter with classic failures (weight-comparison puzzles, simple logic, code errors).
  - Many stress that we don’t yet know when they reason reliably, which is the key safety and trust issue.
- Some see LLMs as transformative “bad reasoning machines” that are already useful and rapidly improving; others see them as expensive toys overhyped by corporate interests.
Politics, Ideology, and Disciplinary Turf
- Several comments tie Chomsky’s skepticism to his linguistic commitments (universal grammar, nativism) rather than his left politics; others point out that his core concern is explaining human language, not beating benchmarks.
- There’s visible friction between:
  - ML/engineering culture, excited by capabilities and emergent behavior.
  - Linguistics/“ivory tower” culture, emphasizing formal theories, falsifiability, and caution about equating performance with explanation.
- Some argue AI skepticism on the left is partly anti-corporate; others warn that dismissing LLMs to “oppose tech” risks irrelevance as these tools diffuse into everything.