Thinking Fast, Slow, and Artificial: How AI Is Reshaping Human Reasoning
Role of “System 3” / AI in Reasoning
- AI is framed as a de facto “System 3”: an external reasoning layer that people increasingly defer to.
- Commenters note AI introduces its own “cognitive biases,” shaped by training data, marketing, and culture.
- Some argue this isn’t fundamentally different from asking another human; others say AI is uniquely opaque and confidently wrong, making errors easier to miss.
- One view: AI doesn’t add a new system, it just moves existing cognition into “autopilot” and hides the struggle.
Impact on Individual Cognition
- Several users report feeling cognitively stronger: more high‑level thinking, better problem solving, and “rubber‑ducking” into their own insights through dialogue with models.
- Others see AI as an “amplifier”: it boosts those who already think deeply, while many users get lazier or struggle to use it effectively.
- A strong opposing view: delegating core thinking to LLMs inevitably weakens critical thinking, likened to offshoring manufacturing and later realizing the skills are gone.
Tools, Atrophy, and Historical Analogies
- Comparisons to calculators, GPS, cars, and databases:
  - Pro‑AI side: tools free us from low‑level work and have historically left us better off.
  - Cautious side: overuse leads to atrophy (mental arithmetic, navigation, physical fitness); AI should be used with the same discipline as diet or exercise.
- Some intentionally avoid calculators/GPS to preserve “mental muscle.”
Reliability, Use Cases, and Verification
- Multiple comments stress that LLMs must not be treated like calculators or CPUs: they are probabilistic and wrong far more often.
- Suggested safe uses: tasks that are easy to verify, tasks you already understand, learning one step beyond your current skill, or low‑stakes outputs.
- Subtle but plausible errors (e.g., in color‑space conversions; see the sketch after this list) show how easily non‑experts can be misled.
- Users describe strategies like multi‑model cross‑checks (also sketched below) and carefully phrased prompts, but admit the process is frustrating.
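
As one illustration of the color‑space point, here is a minimal sketch (not from the thread) of the kind of mistake that looks entirely plausible to a non‑expert: computing relative luminance directly from gamma‑encoded sRGB values instead of linearizing them first. Both functions look reasonable; on mid‑gray they disagree by more than a factor of two.

```python
# Subtle color-space pitfall: luminance must be computed from *linear* RGB,
# not from gamma-encoded sRGB bytes. Both versions below look plausible.

def srgb_to_linear(c: float) -> float:
    """Invert the sRGB transfer function (c in [0, 1])."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def luminance_wrong(r: int, g: int, b: int) -> float:
    # Plausible but wrong: applies Rec. 709 weights to gamma-encoded values.
    return 0.2126 * r / 255 + 0.7152 * g / 255 + 0.0722 * b / 255

def luminance_right(r: int, g: int, b: int) -> float:
    # Correct: linearize each channel first, then apply the weights.
    return (0.2126 * srgb_to_linear(r / 255)
            + 0.7152 * srgb_to_linear(g / 255)
            + 0.0722 * srgb_to_linear(b / 255))

# Mid-gray (128, 128, 128): the two answers differ by a factor of ~2.3.
print(luminance_wrong(128, 128, 128))   # ~0.502
print(luminance_right(128, 128, 128))   # ~0.216
```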
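
And a minimal sketch of the multi‑model cross‑check strategy, assuming a hypothetical `ask` wrapper around each provider's API (not any real client library); it accepts an answer only when most models agree:

```python
from collections import Counter

def ask(model: str, prompt: str) -> str:
    """Hypothetical wrapper around a model provider's API; stub to be replaced."""
    raise NotImplementedError("plug in your provider's client here")

def cross_check(prompt: str, models: list[str], threshold: float = 0.6) -> str | None:
    """Query several models; return an answer only on rough consensus."""
    answers = [ask(m, prompt).strip().lower() for m in models]
    best, count = Counter(answers).most_common(1)[0]
    # No majority agreement -> treat the output as unverified, defer to a human.
    return best if count / len(answers) >= threshold else None
```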
Social and Epistemic Effects
- Concern that AI-written text makes “everyone sound like an expert,” eroding cues for real depth and expertise.
- Worry that attention‑optimizing LLMs resemble addictive feeds, blurring the line between usefulness and manipulation.
- Speculative fears: AI could accelerate a long‑term “dumbing down,” contribute to an “Idiocracy” scenario, or even be part of a Fermi‑paradox‑style collapse.
- Others counter that AI is already matching or exceeding humans in some domains (e.g., coding, math) and that denying this is identity‑protective.
Foundations and the Paper Itself
- Some note that the classic System 1/System 2 framework has faced replication failures and theoretical critiques, which may weaken the paper’s conceptual basis.
- The study’s highlighted finding: AI assistance improves performance when the model is right but reliably degrades it when the model is wrong, even under time pressure and incentives.
- Several suspect parts of the paper were written with AI; opinions vary on whether that undermines its trustworthiness or is simply the new normal.