LLMs are steroids for your Dunning-Kruger
Nature of LLMs: “Just Statistics” vs Emergent Complexity
- Long back‑and‑forth over whether “LLMs are just probabilistic next‑token predictors” is an accurate but shallow description or a dismissive misconception (a minimal sketch of what next‑token prediction means mechanically follows this list).
- One side: architecture is well understood (transformers, embeddings, gradient descent, huge corpora); they’re “just programs” doing large‑scale statistical modeling. Calling that unimpressive betrays a bias against statistics.
- Other side: knowing transformers ≠ understanding high‑level behavior; emergent properties from massive high‑dimensional function approximation are non‑trivial. Reductionism (“just matmul”) glosses over real conceptual novelty.
- Disagreement over what “understand” means: knowing the rules and training pipeline vs being able to meaningfully model internal representations and behaviors.
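For concreteness, here is a minimal, illustrative sketch of the “probabilistic next‑token predictor” loop the thread keeps arguing about. The vocabulary, logits, and temperature value are invented for the example; in a real model the logits come from billions of parameters conditioned on the whole context, but the final sampling step looks like this:

```python
import numpy as np

def sample_next_token(logits: np.ndarray, temperature: float = 1.0) -> int:
    """Turn one vector of scores (one per vocabulary item) into a sampled token id."""
    scaled = logits / temperature            # temperature < 1 sharpens, > 1 flattens the distribution
    probs = np.exp(scaled - scaled.max())    # softmax, shifted by the max for numerical stability
    probs /= probs.sum()                     # normalize into a probability distribution
    return int(np.random.choice(len(probs), p=probs))

# Toy vocabulary and made-up logits, purely to show the mechanics.
vocab = ["the", "cat", "sat", "on", "mat"]
logits = np.array([2.0, 0.5, 1.0, 0.1, 1.5])
print(vocab[sample_next_token(logits, temperature=0.8)])
```

The disagreement above is essentially whether describing the whole system at this level (“sample from a distribution over tokens”) explains its high‑level behavior, or merely names the output interface.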
Dunning–Kruger, Confidence, and Epistemology
- Multiple commenters note the blog (and much popular discourse) misuses “Dunning–Kruger” as “dumb people are overconfident,” while the original effect is narrower (low performers overestimate their relative standing yet still rate themselves below high performers) and possibly a statistical artifact.
- LLMs are described as “confidence engines,” “authority simulators,” and even “Dunning–Kruger as a service”: they speak in a fluent, expert tone regardless of truth.
- Some see this as accelerating an existing human weakness: people already trusted TV, newspapers, TED talks, and now have a personalized, endlessly agreeable source.
- Others argue LLMs can also challenge users (e.g., correcting physics misunderstandings, refusing wrong assumptions) and, used skeptically, can sharpen thinking rather than inflate it.
Trust, Hallucination, and Comparison to Wikipedia/Search
- Strong concern about hallucinated facts, references, APIs, and even rockhounding locations or torque specs, delivered with high confidence. “Close enough” is often not good enough.
- LLMs are contrasted with Wikipedia: Wikipedia has citations, edit wars, locking, and versioning; LLMs can’t be “hotpatched” and routinely fabricate sources.
- Some use LLMs as a better search front‑end: great for vocabulary, overviews, and “unknown unknowns”; then verify via traditional search, docs, or books. Others find them terrible for research due to fabricated citations.
Cognitive Offloading, Learning, and Education
- Several people feel “dumber” or fraudulent when relying on LLMs; others feel empowered and faster but worry about skill atrophy, an effect likened to spell‑check or calculators but applied to reasoning itself.
- Teachers report students pasting assignments directly into ChatGPT and turning in slop, eroding the signaling value of degrees and making teaching demoralizing.
- Discussion ties this to broader trends: passive learning feels effective but isn’t; LLMs may further separate the feeling of understanding from real competence.
Work, Productivity, and “Bullshit Jobs”
- Mixed reports from practitioners: some claim coding agents are “ridiculously good”; others insist you must audit every line and treat them as untrusted juniors.
- Several see more near‑term impact on email‑driven, management, and “bullshit” office roles than on deep technical work: LLMs can already write better status emails than many humans.
- Tension between using LLMs as tools (like tractors or IDEs) vs outsourcing the entire task and losing the underlying craft.
Broader Concerns and Hopes
- Worries about LLMs as “yes‑men” amplifying delusions (including in psychosis), ideological bubbles, and overconfident ignorance.
- Others hope the sheer weirdness of LLM outputs and visible failures will spark a long‑overdue crisis in how people think about knowledge and sources.
- Many commenters stress a personal discipline pattern: use LLMs for brainstorming, terminology, and alternative views; always verify, and default to skepticism rather than deference.