Talking to LLMs has improved my thinking

Perceived benefits for thinking and learning

  • Many commenters report similar experiences to the article: LLMs help crystallize half‑formed ideas, name concepts, surface prior work, and provide starting points for deeper research.
  • They’re seen as a patient, always‑available “expert” across many domains (math, DSP, history, philosophy, emotional dynamics), especially valuable for autodidacts without access to mentors.
  • Large context windows and multimodal models let people “throw the book at it” or explore visual creativity, making previously boring or forbidding topics (e.g., writing, advanced math) feel approachable.

Rubber-ducking, writing, and cognition

  • Strong agreement that LLMs function as upgraded rubber ducks: explaining a problem forces structure, revealing gaps in understanding.
  • Some see them as an accelerant to the longstanding “writing is thinking” effect: faster iteration, more feedback, more probing of intuitions.
  • Others argue the core improvement still comes from thinking/writing itself; LLMs are just a conversational interface to that process.

Limitations, hallucinations, and cognitive debt

  • Several warn that LLM answers are often subtly wrong; for curiosity‑only usage this may still be fine, but others argue a wrong answer can be worse than no answer.
  • Concerns about “cognitive debt”: outsourcing framing and explanation can erode originality, give false confidence in vague intuitions, or leave people defending ideas they can’t reason about.
  • Some say LLMs tend to produce polished, generic framings that miss the point; the struggle to articulate ideas yourself is seen as where much of the value lies.

Ownership, monetization, and control of LLMs

  • Widespread worry about future enshittification: models nudging users toward products, beliefs, or political narratives.
  • Debate over open‑source vs proprietary frontier models: optimism that local models will improve, but acknowledgment that private data and tooling (e.g., integrated code execution) may keep big vendors ahead.
  • Proposals include government‑funded “public infrastructure” LLMs, met with sharp disagreement over state propaganda risks; alternatives suggested include nonprofit, Wikipedia‑like “open WikiBrain” models.
  • Meta‑concerns: how to verify downloaded or “uncensored” models aren’t covertly biased; possibility of deceptive alignment; even distrust that communities evaluating models aren’t astroturfed.

Quality, analogies, and usage patterns

  • Coffee analogy: LLMs as cheap, ubiquitous productivity aids; critics note both coffee and models vary hugely in quality and can foster dependence.
  • Techniques to use LLMs well: treat them as sparring partners, explicitly request criticism, maintain “agent spec” files (e.g., agent.md) to reduce unwanted assumptions, always apply human scrutiny.

Education, institutions, and social effects

  • Some claim institutions became partially obsolete with the internet and see LLMs as another step toward self‑education; others emphasize their biggest value precisely for those outside formal education.
  • Split views on whether LLMs will improve expressive ability or encourage sloppy, unstructured language the way spell‑check weakened spelling skills.
  • Noted social upside: LLMs provide low‑pressure dialogue free of status and social anxieties, which can make reflective thinking easier for some users.

Authenticity and style skepticism

  • Multiple commenters suspect the article itself was partially LLM‑written based on phrasing patterns; others criticize the prose as muddled and question taking thinking‑advice from it.
  • There is also discomfort with AI‑generated comments in the thread itself, reinforcing unease about blurred boundaries between human and machine contributions.