Remarks on AI from NZ

Ecology, “Eye-Mites,” and Human Dominance

  • The “eyelash mite” analogy spurs debate about who depends on whom: some read it as humans eventually subsisting on AI byproducts; others note today’s AIs are utterly dependent on human-built energy, data, and hardware.
  • The claim that humans have a “stable position” among other intelligences is challenged as euphemistic: commenters point to massive biodiversity loss, human-driven megafauna extinctions, livestock-dominated biomass, and industrial agriculture.
  • New Zealand is discussed as a partial counterexample (significant protected land), but even there, farmable land is limited and mostly given over to pasture.

Human Dependence, Skill Atrophy, and the Eloi

  • Some doubt a future where everyone becomes Eloi-like mental weaklings: human curiosity and archival knowledge make total loss of understanding unlikely.
  • Others argue complexity is already outrunning individual comprehension; LLMs can be world‑class teachers for the motivated, but research and anecdotes suggest weaker students get worse and skills can rapidly degrade when people “drink the Kool‑Aid.”
  • Several worry more about fragile high-tech dependencies (biotech, antibiotics, critical factories) and social overreliance than about sci‑fi rebellion scenarios.

Corporations, Collective Minds, and Proto-AI

  • A recurring analogy: corporations, militaries, and governments already behave like non-human intelligences with their own goals (profit, power, influence), built from many humans.
  • Some extend this to say we already live with a form of “ASI”: large institutions can pursue complex goals beyond any individual’s capacity.
  • Others reject this as definitional sleight of hand: institutions make obvious errors and are bounded by their smartest members' abilities and by slow internal communication; this is nothing like a truly superhuman, unified intelligence.

Superintelligence Risk: Extinction vs Managed Coexistence

  • One camp: truly superhuman, unaligned AI almost guarantees human extinction or irrelevance; nature’s “competition” analogy is misleading, as nothing today rivals humans the way ASI could.
  • The opposing camp: fictional models (e.g., benevolent superintelligent governors) show plausible coexistence if systems are explicitly aligned with human flourishing.
  • Disagreements center on whether alignment by default is realistic, whether the fears resemble religious eschatology, and how much credence we should give to imagined catastrophic scenarios (from microdrones to mundane, legalistic disempowerment via ubiquitous automation).

Media Theory: Augmentation as Amputation

  • McLuhan’s idea that every technological extension is also an amputation is embraced and elaborated: LLMs are seen as a new medium, not just fancy books or search.
  • Commenters worry about which human faculties atrophy—mathematical reasoning, tolerance for boredom, and other cognitive “muscles”—as we offload more to AI.
  • There’s concern that, as with other media, we may gradually strip away parts of the self and become servomechanisms of our tools.

Work, Inequality, and Creative Fields

  • Designers push back hard on the suggestion that AI is just a helpful tool: they expect that once AI-generated work is “good enough,” companies will simply lay them off.
  • Tension emerges between “AI as toil reducer” and “AI as direct replacement,” with skepticism that individual empowerment can balance corporate incentives.
  • Another thread worries about concentrated compute and capital: frontier AI may remain under control of a small elite, driving a massive, possibly permanent, power and wealth imbalance.

Process and Legitimacy of the Conversation

  • New Zealand readers express frustration that such discussions are held in closed, elite settings in their own country without their awareness, reinforcing a sense of non-overlapping bubbles shaping policy.
  • Others defend formats like the Chatham House Rule as promoting frank discussion, while acknowledging the optics of exclusivity.