AI might yet follow the path of previous technological revolutions
Is AI “normal technology” or something else?
- Many argue current AI, especially LLMs, is an incremental advance on decades-old techniques, now scaled up with more data and compute. From this view, it’s a “normal” general-purpose technology whose impact will diffuse slowly and unevenly.
- Others counter that the unusual thing now is approaching (and sometimes exceeding) human-level performance in key cognitive tasks, which could have qualitatively different economic and social consequences than past automation.
- The “explosive” scenario (self-improving AI leading to a singularity) is widely debated: some see no evidence of exponential self-improvement; others say it’s too early to rule out, but caution against inevitability arguments based on pure possibility.
Capabilities, limitations, and whether LLMs “think”
- One camp treats LLMs as “calculators/word synthesizers/statistical interpolators” that lack understanding, motivation, memory, and robust reasoning; they require human supervision and often hallucinate.
- Another camp notes that we don’t fully understand human cognition either, so confidently declaring LLMs “non-thinking” is premature, especially as they keep acquiring abilities once thought impossible for them.
- Sub-debates cover:
  - Whether intrinsic motivation, embodiment, or qualia are required for "intelligence."
  - Weaknesses in long-term planning, mathematics, games, and consistent rule-following.
  - Jagged capability profiles (superhuman in some niches, poor in others) and the risk of "capability overhang," where latent abilities surface only after deployment.
Economic, social, and ecological stakes
- Some see AI as comparable to spreadsheets or the internet: transformative but ultimately mundane, mainly boosting productivity (drafting text/code, analytics, support automation).
- Others emphasize novel risks: mass propaganda, offloading critical thinking, ecological strain (energy use), and compounded systemic shocks alongside climate and geopolitical risks.
- There’s disagreement over whether regulation is a drag on useful deployment or a necessary check on bias, data misuse, and safety risks.
Historical analogies and diffusion
- Comparisons are made to electricity, motors, cars, social media, cloud computing, and earlier "computers aren’t pulling their weight" debates (the Solow productivity paradox); many expect an S-curve of adoption, with impact overestimated in the short term and underestimated in the long term.
- Some stress that AI’s self-managing potential (perceiving context, correcting its own errors) could break with past diffusion patterns; skeptics reply that present systems still fall well short of that.
Terminology, hype, and real use
- Long-running ambiguity over “AI,” “AGI,” and “intelligence” fuels confusion and marketing hype.
- Several commenters want to treat LLMs as powerful but non-magical tools for search replacement, support, data analysis, and agents: likely to become as boring and embedded as Office, not an immediate civilisation-ending force.