Using secondary school maths to demystify AI

Article & Title Reception

  • Many feel the post underdelivers on its headline; it’s mostly a report on workshops, with little actual maths or technical depth.
  • The original “AI systems don’t think” framing is seen as provocative and distracting; some welcome the later, softer rewording, while others say the article still leans too hard on that claim without ever defining “think”.

Teaching AI with School Maths

  • Several commenters like the idea of using ANNs/ML examples to teach secondary-school maths and demystify AI (a minimal worked sketch follows this list).
  • Others criticize the chosen examples (e.g. traffic-light classification) as unrealistic or conceptually sloppy, and hope future curricula will use better-grounded problems.
  • Some wish they’d been taught neural nets earlier, contrasting this with older AI courses that dismissed ANNs in favor of other methods.
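
  To make the “ANN basics are secondary-school maths” point concrete, here is a minimal sketch of a single artificial neuron: a weighted sum plus a threshold, nothing beyond linear equations and inequalities. The traffic-light features, weights, and bias are invented for illustration and are not taken from the article or the thread.

      # A single artificial neuron using only secondary-school maths:
      # a weighted sum followed by a threshold ("step") activation.
      def neuron(inputs, weights, bias):
          # Weighted sum: w1*x1 + w2*x2 + ... + bias
          total = sum(w * x for w, x in zip(weights, inputs)) + bias
          # Output 1 if the sum is positive, otherwise 0
          return 1 if total > 0 else 0

      # Toy "stop vs go" classifier on two made-up features:
      # x1 = how red the light looks, x2 = how green it looks.
      weights = [2.0, -2.0]   # redness pushes towards "stop", greenness away
      bias = -0.5

      print(neuron([0.9, 0.1], weights, bias))  # mostly red   -> 1 (stop)
      print(neuron([0.1, 0.9], weights, bias))  # mostly green -> 0 (go)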

What Does It Mean for AI to “Think”?

  • A long thread debates definitions: thinking vs reasoning vs computation vs consciousness.
  • One view: AI is “just maths” and computation, but humans are “just biology/physics” too; in neither case does that reduction settle whether there is “thinking” or consciousness.
  • Another view: without a clear, testable definition of “thinking”, blanket claims (“AI does/doesn’t think”) are unfalsifiable and mostly rhetorical.
  • Functionalist, substrate-independent positions (brains and computers can realize the same processes) clash with views that brains do something qualitatively different or not yet mathematically formalized.

Turing Test, Chinese Room, and Thought Experiments

  • Some argue that modern LLMs can effectively pass Turing-like tests, at least over finite conversations; others say it’s still easy to expose them if you know what to probe.
  • Commenters note the irony that the Turing Test is being discussed less just as systems become competitive at passing it.
  • The Chinese Room thought experiment is revisited: some see it as useless or question-begging; others see it as a live challenge to claims that symbol manipulation equals understanding.
  • Pen-and-paper and brain-simulation arguments (Church–Turing, simulations vs reality, map vs territory) lead to disputes about whether simulating a brain would yield genuine consciousness.

Limits, Capabilities, and Anthropomorphism

  • Capabilities cited: strong performance in maths/programming contests, code generation, few-shot learning in-context (illustrated in the sketch after this list), and emergent computation inside transformers.
  • Limits cited: inability to robustly correct their own reasoning, dependence on the training distribution, hallucinations, high energy use, and weak arithmetic without tools.
  • Some argue anthropomorphic language (“LLMs are dishonest”, “they believe…”) and commercial AI hype mislead the public into over-ascribing agency or thought.
  • Others argue that, regardless of labels, these systems already match or exceed humans on many tasks, and the human–machine gap may be narrower than people want to admit.
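
  To make “few-shot learning in-context” concrete: the model is shown worked examples inside the prompt itself and continues the pattern, with no change to its weights. The task and examples below are invented for illustration.

      # Hypothetical few-shot prompt: the "learning" happens entirely
      # in-context, from the examples embedded in the prompt text.
      prompt = (
          "Reverse each word.\n"
          "cat -> tac\n"
          "dog -> god\n"
          "bird -> "   # a capable LLM typically continues with "drib"
      )
      # No weights are updated; the pattern is inferred at inference time.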