Deciphering language processing in the human brain through LLM representations

Excitement about understanding and “hacking” the brain

  • Several commenters are enthusiastic, hoping this work leads to deeper models of the brain and eventually “hackable” cognition: better motivation, faster learning, pain control, etc.
  • Others warn that if you can edit your own brain, others can too; existing “hacks” via drugs, behavior modification, and stimulation are noted.
  • Some recommend nootropics or meditation as current, low-tech brain tuning.

Are LLMs more than “stochastic parrots”?

  • Supportive view: alignment between LLM representations and neural activity in speech/language areas is taken as evidence that the models capture the structure of the world much as humans do, going beyond mere parroting.
  • Critics argue the reported correlations are modest (roughly 0.25–0.5; see the quick arithmetic after this list) and rest on strong modeling assumptions, so they don’t demonstrate equivalence to brains.
  • One line of thought: this may show instead that humans are themselves closer to “stochastic parrots” than we like to admit.
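To put the disputed numbers in context, here is the back-of-the-envelope conversion from correlation to explained variance (a sketch in Python; the r values are the ones quoted in the thread, not re-derived from the paper):

```python
# Quick arithmetic on the correlations quoted in the thread (r = 0.25-0.5):
# squaring a Pearson r gives the fraction of variance explained.
for r in (0.25, 0.5):
    print(f"r = {r:.2f} -> r^2 = {r*r:.4f} ({r*r:.1%} of variance explained)")

# A caveat both camps tend to accept: neural recordings are noisy, so encoding
# models are often judged against a noise ceiling (the best r achievable given
# trial-to-trial variability) rather than against r = 1.0.
```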

Thinking vs language processing

  • Disagreement over whether language processing and “thinking” are separable faculties.
  • One side cites cases from neuroscience and linguistics (e.g., savants, synthetic language learning) to argue language is a distinct capacity interfacing with more general cognition.
  • Others argue in practice they are deeply intertwined and not cleanly separable in brain activity.

Grammar, rules, and probabilistic language

  • Long debate over whether human language is fundamentally rule/grammar-based (Chomskyan view) or best seen as probabilistic/statistical.
  • Some argue grammars are “useful fictions” or lossy models; real language is messy and probabilistic, closer to how LLMs operate.
  • Others counter that languages obey non-arbitrary structural constraints (e.g., structure dependence) that imply an underlying rule system, even if not fully characterized.
  • Evidence that rare or unexpected constructions are harder to process is read by one side as support for probabilistic processing; the other side says this reflects prediction and surprise layered on top of a deterministic parser (see the surprisal sketch after this list).
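The “prediction and surprise” idea has a standard quantitative form: surprisal, the negative log-probability of a word given its context. A minimal sketch, assuming the Hugging Face transformers library and GPT-2; the model choice and the garden-path example are illustrative, not from the article:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def token_surprisals(text: str):
    """Per-token surprisal in bits: -log2 p(token | preceding tokens)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits                        # (1, seq_len, vocab)
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)  # predict positions 1..n
    targets = ids[0, 1:]
    nats = -logprobs[torch.arange(targets.numel()), targets]
    bits = nats / torch.log(torch.tensor(2.0))
    return list(zip(tokenizer.convert_ids_to_tokens(targets.tolist()), bits.tolist()))

# Classic garden-path sentence: surprisal should spike at the disambiguating
# "fell", mirroring the human processing difficulty both camps argue about.
for tok, s in token_surprisals("The horse raced past the barn fell."):
    print(f"{tok!r:>12}  {s:6.2f} bits")
```

Both camps can accept the measurement itself; they differ on whether high surprisal reflects the core mechanism of comprehension or a predictive layer sitting on top of a rule-based parser.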

Methodology, statistics, and embeddings

  • Concerns about small subject numbers, brief stimuli (e.g., a single podcast), and highly contrived geometric interpretations of neural data (“brain embeddings”).
  • Skepticism that modest correlations justify strong claims; some suggest correlation is a poor measure of dependence here and that entropy-based metrics such as mutual information might serve better (see the demo after this list).
  • A technical worry: high-dimensional LLM activations, especially if sparse (“few-hot”), can be linearly fit to a great many signals, including random or unrelated ones, which risks sophisticated p-hacking. Prior work showing that even randomly initialized LLMs can be fit to brain data is cited.
  • Others respond that if no shared structure existed, such alignment methods simply wouldn’t work at all; held-out evaluation against a random-embedding control (sketched below) is a common way to adjudicate.
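On the entropy-based suggestion: a toy demonstration of why plain correlation can be a poor dependence measure, using scikit-learn’s mutual_info_regression (a nearest-neighbor mutual-information estimator) on a strong but nonlinear relationship. The data are synthetic:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 5000)
y = x**2 + 0.05 * rng.standard_normal(5000)  # strong but nonlinear dependence

print("Pearson r:", round(np.corrcoef(x, y)[0, 1], 3))  # near 0: correlation misses it
print("mutual information (nats):",
      round(mutual_info_regression(x.reshape(-1, 1), y)[0], 3))  # clearly > 0
```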
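And on the fitting worry itself, a minimal sketch of the encoding-model setup under debate, run on purely synthetic data: cross-validated ridge regression from embeddings to a ‘neural’ signal, with a random-embedding control of identical shape. Dimensions, noise level, and all names here are illustrative assumptions, not the paper’s:

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n_words, dim = 2000, 768                      # stimulus length; embedding width

X_llm = rng.standard_normal((n_words, dim))   # stand-in for real LLM activations
true_w = rng.standard_normal(dim)
# Noise scaled so held-out r lands roughly in the 0.3-0.5 range debated above.
y = X_llm @ true_w + 60.0 * rng.standard_normal(n_words)
X_rand = rng.standard_normal((n_words, dim))  # random-embedding control

def held_out_r(X, y):
    """Mean out-of-fold correlation between ridge predictions and the signal."""
    rs = []
    for train, test in KFold(n_splits=5).split(X):
        model = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X[train], y[train])
        rs.append(np.corrcoef(model.predict(X[test]), y[test])[0, 1])
    return float(np.mean(rs))

print("LLM embeddings,    held-out r:", round(held_out_r(X_llm, y), 3))   # well above 0
print("random embeddings, held-out r:", round(held_out_r(X_rand, y), 3))  # approx. 0
# In-sample fits can look impressive for BOTH matrices (the p-hacking worry);
# only the held-out score separates real shared structure from none.
```

If the random control also scored well out of sample, that would vindicate the skeptics; if it collapses toward zero while the real embeddings do not, that supports the “shared structure” response.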

Brain uploading and identity

  • A side thread speculates on whether this line of work moves us toward uploading minds and “killing death.”
  • Disagreement over whether an upload is “you” or merely a copy, with analogies drawn to sleep versus rebooting.
  • Some argue you also need to simulate bodily states and hormones to preserve motivations; others reply those motivations can, in principle, be copied or modified.
  • Gradual vs discontinuous upload is discussed, with the claim that if the end brain state is identical, the process may not matter for identity.

Novelty and prior work

  • Commenters note that similar transformer–brain correlation papers already exist, so this work is seen as incremental rather than groundbreaking.
  • Several emphasize that correlation does not equal causation: at best, these results suggest overlapping computational principles or useful decoding tools, not that LLMs and brains are “the same thing.”