The Impact of Generative AI on Critical Thinking [pdf]
What the study actually measures
- Several commenters stress that the paper studies self-reported critical thinking within AI-assisted tasks, not whether AI makes people generally less intelligent.
- Core finding as interpreted: higher trust in GenAI correlates with less perceived critical-thinking effort; higher self-confidence correlates with more perceived effort.
- Some argue this mainly reveals a general human tendency: when a tool gives plausible answers, people stop digging unless they have a clear reason.
Methodological criticisms
- Strong pushback on the survey design:
  - “Critical thinking” is self-assessed, not objectively tested.
  - Items like “I always trust AI” vs. “I question AI’s intentions” blur distinct constructs: usage frequency, trust, and critical thinking are intertwined.
- Concerns about the small, self-selected sample and the lack of a control group.
- Some still find the qualitative examples helpful for articulating how work shifts from creation to verification.
AI as a cognitive tool: shifting vs. losing skills
- One camp: technology historically reallocates cognition (printing press, calculators, GPS, high-level languages). We offload rote skills and move “up the stack” to abstraction and design. Losing obsolete skills can be fine if outcomes improve.
- Opposing camp: some “obsolete” skills (navigation, writing, arithmetic) are still practically and psychologically important (resilience, autonomy, motivation). Tech-driven offloading may contribute to alienation and dependence.
Experiences using LLMs at work
- Mixed utility reports:
  - Helpful for boilerplate code, style rewrites, intros, search-like Q&A, and tutoring/quiz generation.
  - Others find they slow things down: more time spent “babysitting” or rewriting “AI slop” than coding directly.
- For complex or novel systems, AI often fails; by the time humans take over, they face a steeper learning curve.
- Concern that replacing junior work with AI undermines experiential learning.
Verification, trust, and critical thinking
- Strong emphasis that LLM outputs must be verified; hallucinations are frequent and especially dangerous for learners who don’t know what to check.
- Some compare AI to GPS: great for most cases, but failure modes can be severe and people tend to follow it uncritically.
- Others argue AI, like a pry bar, gives “cognitive leverage,” freeing time for higher-level reasoning, provided users understand its limitations and switch to other tools (books, experts) when precision matters.
- Multiple commenters note that people already over-trusted Google; LLMs accelerate an existing problem of treating convenient outputs as ground truth.
Education and cognitive development
- Worries about students using AI/Grammarly instead of learning spelling, writing, or problem solving, and about a generation habituated to an “easy button.”
- Counterargument that strict spelling and similar norms are largely social conventions whose importance might be overstated.
- Desire for longitudinal studies on how AI changes what skills are learned and which atrophy, rather than just how people feel about thinking effort.
Knowledge ecosystem and “truth decay”
- Some fear that as more sources become AI-generated or polluted by hallucinations, independent verification of facts may become impossible, worsening existing misinformation dynamics.
- This is framed as a potential threat not just to individual critical thinking but to shared knowledge and social trust.
Societal and governance concerns
- Skepticism that even strong evidence of cognitive harm would slow AI deployment, given money and inertia; climate change is cited as an analogous case with a mixed track record.
- Concerns about future AR+AI scenarios where people are continuously guided by models, becoming “meat robots”; others note society is already heavily shaped by education, media, and platforms.
- Some see a paradox: AI vendors warn users to verify outputs while simultaneously marketing systems as highly capable, making widespread over-trust predictable.