What happens when people don't understand how AI works
Perceptions of AI Progress and Future Decline
- Some see little practical coding difference between recent Claude versions and doubt rapid future gains; others report steady, noticeable improvements, though still far from perfection.
- The cliché “the AI you use today is the worst you’ll ever use” is called vacuous; several argue LLM capability curves may already be flattening.
- Many expect quality of service to degrade even if raw capability grows: enshittification via ads, paywalling, political/monetization bias, and lock-in, drawing parallels to Google Search and the wider web's decline.
- A minority believe current LLMs may already be the best we get in practice, before business incentives corrupt them.
Psychological and Spiritual Misuse
- The “ChatGPT-induced psychosis” phenomenon alarms commenters: vulnerable, lonely, or psychotic users treating LLMs as gods, spiritual guides, or self-aware beings.
- Others say psychosis will always latch onto something (religion, social media, conspiracies); LLMs are just a new “force multiplier.”
- Some argue people have always worshiped man-made abstractions (state, leaders, texts); AI is just the latest idol.
LLMs as Tools vs Oracles
- One camp uses LLMs as better search/summarization/coding tools: quick terminology lookups, domain overviews, SQLAlchemy snippets, law-like rules, etc., always with external verification (see the sketch after this list).
- Another warns that many non-technical users assume factuality and don’t know about hallucinations, effectively treating chatbots as oracles.
- This fuels a debate over calling LLMs “divinatory instruments”: critics say the analogy is overbroad and obscures differences from ordinary information retrieval; supporters say it captures how many people experience the interface.
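To make the tool-with-verification workflow concrete, here is a minimal sketch of the kind of SQLAlchemy snippet commenters describe requesting, checked by actually running it against a throwaway in-memory database rather than trusting the model's output. The `User` model and query are hypothetical stand-ins, not examples taken from the thread.

```python
# Hypothetical example: verify an LLM-suggested SQLAlchemy query by running
# it against a disposable in-memory SQLite database, not by trusting it.
from sqlalchemy import create_engine, select, Integer, String
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, Session

class Base(DeclarativeBase):
    pass

class User(Base):
    __tablename__ = "users"
    id: Mapped[int] = mapped_column(Integer, primary_key=True)
    name: Mapped[str] = mapped_column(String(50))

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all([User(name="ada"), User(name="grace")])
    session.commit()
    # The model-suggested query, verified by execution instead of assumed correct:
    names = session.scalars(select(User.name).order_by(User.name)).all()
    assert names == ["ada", "grace"]
    print(names)
```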
What Counts as “Thinking” or “Understanding”?
- Long arguments revolve around whether next-token prediction can be called “thinking” (a minimal sketch of the mechanism follows this list).
- Some stress LLMs lack grounding, embodiment, goals, and rich world models; they see outputs as statistically fluent but ontologically empty.
- Others lean functionalist: if behavior is indistinguishable from human answers in many domains (Turing-style), insisting it’s “not real understanding” is seen as semantics or human exceptionalism.
- Related disputes touch on consciousness, free will, animal cognition, and whether all symbolic communication involves projection and interpretation.
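For readers unfamiliar with the mechanism being argued about, here is a minimal, self-contained sketch of next-token prediction: a model assigns scores (logits) to candidate next tokens, a softmax turns them into probabilities, one token is sampled, and generation is just that step repeated. Real LLMs compute logits with a transformer; the toy bigram counts below are a stand-in.

```python
# Minimal sketch of next-token prediction: a toy bigram "model" whose
# logits come from corpus counts instead of a transformer.
import math, random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate".split()

# Count how often each token follows each other token.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token(prev: str) -> str:
    vocab = list(counts[prev])
    logits = [math.log(counts[prev][t]) for t in vocab]  # toy "logits"
    exps = [math.exp(l) for l in logits]
    probs = [e / sum(exps) for e in exps]  # softmax: scores -> distribution
    return random.choices(vocab, weights=probs)[0]

token = "the"
out = [token]
for _ in range(6):  # generation = repeated next-token prediction
    if not counts[token]:  # no observed successor: stop generating
        break
    token = next_token(token)
    out.append(token)
print(" ".join(out))
```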
How LLMs Actually Work
- Several note that “trained on the internet” is incomplete: modern chat models crucially depend on supervised fine-tuning and RLHF from vast global workforces of labelers rating style, safety, and “emotional intelligence” (a sketch of how those ratings enter training follows this list).
- This reframes chatbot niceness and apparent empathy as distilled human labor, not emergent soul.
- Others push back that, despite human shaping, transformers still rely on large-scale pattern learning, not classical symbolic reasoning; there’s disagreement about how far beyond “pattern matching” current systems really go.
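As a concrete illustration of how labeler preferences enter training, here is a hedged sketch of the standard pairwise (Bradley-Terry) loss used to train RLHF reward models: the model is pushed to score the response labelers preferred above the one they rejected. The tiny linear reward model and random feature vectors are placeholders for a real network over token sequences.

```python
# Sketch of the pairwise preference loss behind RLHF reward models:
# maximize log-sigmoid of (score_chosen - score_rejected).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Placeholder: a real reward model is a transformer over token sequences;
# a linear layer over made-up 8-dim "response features" stands in here.
reward_model = nn.Linear(8, 1)
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-2)

chosen = torch.randn(16, 8)    # features of responses labelers preferred
rejected = torch.randn(16, 8)  # features of responses they rejected

for step in range(100):
    r_chosen = reward_model(chosen).squeeze(-1)
    r_rejected = reward_model(rejected).squeeze(-1)
    # Bradley-Terry style loss: -log sigmoid(r_chosen - r_rejected)
    loss = -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final loss: {loss.item():.3f}")
```

The trained reward model then scores candidate responses during a separate policy-optimization step, which is where the “distilled human labor” framing above comes from.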
Impact on Work and Institutions
- Many describe LLMs as force multipliers for already-competent people, not replacements for missing expertise.
- There’s concern they’ll be misused by clueless management as a substitute for skilled staff, leading to layoffs, brittle systems, and an “idiocy multiplier.”
- Skeptics emphasize that organizations still need deep human understanding; AI cannot rescue fundamentally bad leadership.
Language, Hype, and Public Understanding
- Commenters repeatedly worry that anthropomorphic marketing terms (“AI,” “reasoning,” “hallucination,” “agents,” “friends”) mislead the public and investors about capabilities and risks.
- Some urge more precise language (LLM, pattern model, summarizer) and better education so people treat outputs as provisional, checkable suggestions rather than truths or revelations.