LLM Problems Observed in Humans
Satire, tone, and intent of the article
- Several readers can’t tell if the piece is earnest or satirical.
- Some read it straight: a critique of how human cognitive flaws mirror “LLM problems,” implying that the benchmarks used to dismiss LLMs could make humans look unintelligent too.
- Others interpret it as satire of the type of person who prefers LLMs to people, mocking the idea that a “good conversation” is one in which the other party mirrors you like a chatbot.
- The “upgrade the human brain” and “LLMs are better conversational partners” lines are seen by some as darkly funny; by others as sociopathic or dehumanizing.
Human vs LLM intelligence and Turing tests
- Debate over whether modern LLMs have already “passed” some form of the Turing test; some say yes, but stress there is no single canonical test.
- One recurring point: comparing LLMs to “humans” is underspecified; are we comparing against average people, professionals, or top experts?
- Some argue LLMs now exceed a large portion of the population in general knowledge and conversation fluency, complicating Turing-style distinctions.
- Others insist human and LLM intelligence are qualitatively different (embodied, goal-directed, accountable vs. disembodied text prediction).
Shared failure modes: humans and LLMs
- Commenters note clear overlaps: limited “context windows,” repetition/loops, shallow reasoning, confabulation, not understanding humor, and reward-hacking via social approval.
- Some think highlighting human fallibility is meaningful; others say the analogy is too broad (“gorillas crash planes too”) and hides crucial differences.
- There’s support for replacing “hallucination” with the psychological term “confabulation.”
Experiences interacting with LLMs
- Mixed reports: some say LLMs argue too much or inject “weird ideas”; others say they are overly agreeable and prone to folie à deux with users.
- Suggested strategies: keep prompts succinct, avoid arguing with or “teaching” the model, and reset the context when it goes off the rails (see the sketch after this list).
- Several users report being corrected by LLMs and later realizing the model was right, increasing their deference in some domains.
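A minimal Python sketch of the “keep it succinct, reset the context” advice above, assuming a simple chat loop that keeps only the most recent turns and lets the user wipe the history; `call_llm`, `MAX_TURNS`, and the `/reset` command are hypothetical placeholders, not any particular vendor's API.

```python
# Chat loop illustrating two suggested strategies:
#   1. keep the context short (trim old turns instead of arguing with the model)
#   2. reset the context entirely when the conversation goes off the rails

MAX_TURNS = 8  # hypothetical cap on how many recent messages stay in context


def call_llm(messages: list[dict]) -> str:
    """Hypothetical stub; replace with a real LLM client call."""
    return f"(stub reply to: {messages[-1]['content']!r})"


def chat() -> None:
    history: list[dict] = []
    while True:
        user_input = input("> ").strip()
        if user_input in {"/quit", ""}:
            break
        if user_input == "/reset":
            history.clear()  # model went off the rails: start fresh
            print("(context cleared)")
            continue
        history.append({"role": "user", "content": user_input})
        history = history[-MAX_TURNS:]  # keep only recent turns in context
        reply = call_llm(history)
        history.append({"role": "assistant", "content": reply})
        print(reply)


if __name__ == "__main__":
    chat()
```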
Social and ethical concerns
- Many criticize the article’s implied view of people as low-quality, upgradable resources rather than reciprocal partners.
- Worry that preferring LLMs for “connection” further erodes patience for normal human limitations.
- Broader concern that artificial companionship, like earlier digital substitutes (e.g., porn), could reduce in-person socialization and reproduction, with unclear evolutionary and societal consequences.