AI is Dunning-Kruger as a service
Dunning–Kruger vs What AI Actually Does
- Multiple commenters argue the title misapplies Dunning–Kruger: the original research is about people misjudging their own competence, not about being fed incorrect information.
- Others say the DK meme has devolved into a generic insult (“too dumb to know they’re dumb”) and is being used that way against AI users.
- Several point to Gell-Mann amnesia / Knoll’s Law as a better frame: people watch AI get things wrong in domains they know well, yet still trust it in domains they don’t.
How LLMs Mislead (and Who’s at Fault)
- Strong theme: LLMs answer with high confidence, making it hard for non-experts to spot errors; this is framed as “being fooled” rather than “being a fool.”
- Some say it’s unreasonable to expect every user to reliably detect mistakes, especially when tools are marketed as search replacements.
- Others insist it’s foolish to treat any LLM output as fact and place responsibility on users to verify.
“Safe” vs “Unsafe” Use Cases
- Many see LLMs as fine for low‑stakes or “lorem ipsum” tasks: placeholder images, mock dashboards, quick scripts, game character names, boilerplate code.
- Pushback: even “small” uses (images with extra fingers, insecure dashboards) can signal sloppiness or introduce real risks.
- Several developers report huge productivity gains for refactoring, test conversion, bug-hunting, and tedious plumbing—provided you already understand the domain and review outputs carefully.
Regulation, Access, and Externalities
- One proposal: regulate AI use like vehicles, with licenses or aptitude checks. Counterargument: enforcing that would require totalitarian‑style surveillance, especially once models can run locally.
- Some worry about environmental and societal externalities (CO₂, spam, scams, dependence on AI); others see these as outweighed by potential “civilizational payoffs.”
Debating the Science of Dunning–Kruger
- Long subthreads dissect the original DK paper: small samples, questionable tasks (e.g., joke rating), and claims that it may be partly a statistical artifact (see the simulation sketch after this list).
- Several note that the popular “confidence vs. experience” curves attributed to DK don’t match the original paper’s data and may be pop-psych oversimplifications.
- Commenters note the meta-irony: misusing DK to critique AI may itself reflect DK-style overconfidence about how well one understands the effect.
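One concrete version of the “statistical artifact” critique (an assumption here: commenters typically mean the regression-to-the-mean / autocorrelation objection) is that the classic DK figure can be reproduced from self-assessments carrying no information at all. A minimal Python sketch of that null model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Null model: self-assessed percentile is pure noise, statistically
# independent of actual skill. Nobody here has any insight (or lack
# of insight) into their own competence.
actual = rng.uniform(0, 100, n)      # actual performance percentile
perceived = rng.uniform(0, 100, n)   # self-assessed percentile

# Rebuild the classic DK figure: bin people by ACTUAL quartile, then
# compare mean actual vs. mean perceived rank within each bin.
quartile = np.digitize(actual, np.percentile(actual, [25, 50, 75]))
for q in range(4):
    m = quartile == q
    print(f"Q{q + 1}: actual={actual[m].mean():5.1f}  "
          f"perceived={perceived[m].mean():5.1f}")
```

Because perceived ability hovers near 50 in every bin, the bottom quartile “overestimates” and the top quartile “underestimates,” the textbook DK pattern, even though self-assessment here carries zero signal; whether this fully explains the original data is exactly what the subthreads dispute.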
Impact on Expertise and Power
- Some see AI as “Brandolini’s Law as a service”: it floods organizations with plausible nonsense that experts must then spend disproportionate effort debunking.
- Others worry AI will let incompetent leaders bypass experts with “good enough” answers, reinforcing existing power structures and eroding real competence.