MIT Study Finds AI Use Reprograms the Brain, Leading to Cognitive Decline
Meta: Link, Hype, and Study Quality
- Many note this thread is a repost; the linked article is from a vaccine-denial site and appears AI-written, with a sensational title that overstates the underlying MIT Media Lab preprint.
- Several urge linking the original arXiv paper and the project site/FAQ instead, which explicitly warn against framing it as “brain rot,” “damage,” or “LLMs make you dumb.”
- Critiques of the study:
  - Small, narrow sample (54 mostly Boston-area students/academics), no blinding, EEG-only, and not yet peer-reviewed.
  - Task is constrained: four 20-minute essay-writing sessions, sometimes with LLM/search assistance.
  - Results show task-specific brain activity patterns, not long-term cognitive decline.
- Some see it as “clickbait research” that confirms an existing anti-tech narrative.
What the Study Actually Shows (and Doesn’t)
- Main findings discussed:
  - LLM users had lower measured cognitive load while writing and much poorer recall of sentences from “their” essays.
  - Participants who had written earlier essays unaided and were then given an LLM showed strong brain engagement when first using the tool.
- Supportive interpretation:
  - Writing is thinking; outsourcing composition reduces deep processing and memory formation.
  - “Use it or lose it”: offloading demanding tasks (like structuring arguments) lets those skills atrophy over time.
- Skeptical interpretation:
  - If the AI wrote most of the text, of course people don’t remember it.
  - Lower effort looks like reduced load, not necessarily “harm.”
  - At most, this shows that using LLMs to cheat on essays undermines learning, not that “AI use reprograms the brain” in general.
Anecdotes: Cognitive Atrophy vs. Augmentation
- Many developers report that “vibe coding” with LLMs leaves them unable to explain or debug their own code, and that code quality across their organizations suffers when people submit obvious AI slop.
- Others say LLMs are transformative for productivity and learning when used as:
  - A tutor, explainer, and code-review assistant.
  - A tool for tedious, boilerplate, or build/devops tasks.
- Several feel their own thinking becomes lazier or less engaged when overusing LLMs, even as output volume increases.
Education, Youth, and Long-Term Concerns
- Strong worry about students using LLMs to write essays: they get grades and credentials without building understanding or critical thinking.
- Fears that a cohort will graduate “empty-headed,” widening inequality between those who are shielded from AI or use it carefully and those who outsource everything.
- Others argue every major cognitive technology (writing, calculators, GPS, the internet) prompted similar moral panics and cognitive tradeoffs; LLMs are another offloading step, not uniquely catastrophic.
How to Use LLMs Safely (According to Commenters)
- Keep AI “at arm’s length”: use it like a powerful search engine, editor, or second opinion, not as an autonomous agent.
- Write first, then ask AI to critique, clarify, or refactor; don’t let it generate the whole essay or module (see the sketch after this list).
- In coding, prefer small, verifiable chunks over full-agent PRs; always review and understand outputs.
- For learning, interrogate and check AI answers, then apply them in real work, rather than copy‑pasting solutions.
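Several of these tips amount to a “critique, don’t compose” loop. Below is a minimal Python sketch of that loop under stated assumptions: `ask_llm`, `CRITIQUE_PROMPT`, and `essay_draft.txt` are all hypothetical names, and the placeholder function body stands in for whatever real model API you call.

```python
# "Critique, don't compose": the human writes the draft, the model only reviews it.
# `ask_llm` is a hypothetical stand-in for whatever chat-completion API you use;
# its inert body keeps this sketch self-contained and runnable offline.

CRITIQUE_PROMPT = """You are a reviewer, not a co-author.
Do NOT rewrite or extend the text. Instead, list:
1. Claims that are unclear or unsupported.
2. Structural problems (ordering, missing transitions).
3. Questions a skeptical reader would ask.

Text to review:
{draft}
"""


def ask_llm(prompt: str) -> str:
    # Placeholder: swap in a real model call here (e.g. an OpenAI-style
    # chat completion). A canned return value keeps the sketch offline.
    return "(model critique would appear here)"


def review_draft(path: str) -> str:
    """Send a human-written draft out for critique only.

    The model's output is feedback to act on, never text to paste back in,
    so authorship (and the thinking that goes with it) stays with the writer.
    """
    with open(path, encoding="utf-8") as f:
        draft = f.read()
    return ask_llm(CRITIQUE_PROMPT.format(draft=draft))


if __name__ == "__main__":
    # `essay_draft.txt` is an illustrative filename.
    print(review_draft("essay_draft.txt"))
```

The same shape fits the coding advice above: paste a small, self-written change into the prompt and ask for risks and edge cases, rather than asking the model to write the module for you.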