LLMs bring a new nature of abstraction – up and sideways

Scope of the “new abstraction” claim

  • Some readers don’t buy that prompting LLMs is a new level of abstraction; it feels more like a different activity entirely, not an abstraction over previous programming work.
  • Others argue it’s a “new nature” of abstraction:
    • Up: expressing intent in natural language, specs, and examples instead of code.
    • Sideways: dealing with probabilistic, non-repeatable behavior rather than deterministic compilation.

Reliability vs solving “harder” problems

  • Supporters say unreliable LLMs can still be worth it if they address problems that were previously too hard or expensive (e.g., “common sense” judgment, messy edge cases, autonomous behavior in previously hopeless scenarios).
  • Skeptics counter that “90% reasonable, 10% insane” behavior is unacceptable for most production systems; better to fail loudly and fix the root cause.
  • Several report LLMs have not solved problems they couldn’t solve themselves, but they dramatically speed up work—mainly turning solvable problems into faster ones, not fundamentally harder ones.

Non-determinism, determinism, and “practical” predictability

  • Strong debate on whether non-determinism is really “unprecedented”: fuzzing, mutation testing, and earlier ML already introduced it, though mostly outside the core compiler/toolchain.
  • Technically, LLMs can be deterministic (temperature 0, fixed seeds, pinned models/engines), but:
    • Hosted APIs, batching, hardware differences, and implementation quirks often break reproducibility.
    • Even with fixed seeds, tiny prompt changes can lead to drastically different outputs.
  • Several distinguish technical determinism from practical determinism: developers can’t reason about prompt changes with the precision they have for code.
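The determinism point above can be made concrete without reference to any particular API. A toy sketch of sampling from logits: at temperature 0 the draw collapses to greedy argmax (deterministic by construction), while at temperature > 0 reproducibility depends on pinning the seed. The logit values are illustrative, not from any real model.

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Pick a token index from raw logits.

    temperature == 0 collapses to greedy argmax (deterministic);
    temperature > 0 samples from the softmax distribution using rng.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    return rng.choices(range(len(logits)), weights=weights)[0]

logits = [2.0, 1.5, 0.1]

# Greedy decoding: same answer on every call, regardless of seed.
greedy = {sample_token(logits, 0, random.Random(s)) for s in range(100)}
assert greedy == {0}

# Sampling: a pinned seed reproduces the draw exactly.
a = sample_token(logits, 1.0, random.Random(42))
b = sample_token(logits, 1.0, random.Random(42))
assert a == b
```

In practice this is where the "technical vs practical" distinction bites: even when the decoding loop itself is deterministic, batching and floating-point non-associativity on GPUs can perturb the logits upstream, which this sketch deliberately leaves out.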

Experiences building with LLMs

  • Practitioners building LLM-based apps report:
    • Minor prompt tweaks causing major behavioral shifts and downstream effects.
    • Context-window failures that silently degrade quality unless you actively manage tokens.
    • Mainstream business users often give up when behavior feels too fuzzy or inconsistent.
  • As coding assistants, LLMs are widely seen as productivity boosters—but they introduce subtle bugs, making tests and strong typing even more important.
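The context-window failure mode mentioned above typically comes from naive history trimming. A minimal sketch, assuming a whitespace word count as a stand-in tokenizer (a real system would use the model's actual tokenizer): oldest messages are dropped first, which is exactly the kind of silent degradation practitioners report.

```python
def trim_to_budget(messages, max_tokens,
                   count_tokens=lambda m: len(m.split())):
    """Drop the oldest non-system messages until the history fits.

    count_tokens is a crude stand-in (word count); the first message
    is treated as a system prompt and always kept.
    """
    system, history = messages[0], list(messages[1:])
    def total():
        return count_tokens(system) + sum(count_tokens(m) for m in history)
    while history and total() > max_tokens:
        history.pop(0)  # silently discards context -- the failure mode to surface
    return [system] + history

msgs = ["You are terse.", "first question", "long answer " * 20, "follow-up"]
kept = trim_to_budget(msgs, max_tokens=10)
assert kept == ["You are terse.", "follow-up"]
```

A production version would at least log what was dropped, or summarize evicted turns instead of deleting them, so quality degradation is observable rather than silent.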

Natural language vs formal code

  • Some want “English to bytecode,” treating prompts as the source code and LLM output as the compiled target.
  • Others invoke classic arguments (e.g., Dijkstra) that natural language is inherently imprecise; precision requires formalism and well-defined machine models.
  • A nuanced camp pushes for mixed systems: blend natural language for intent and high-level behavior with traditional code and formal models (e.g., TLA+ + LLM, or languages explicitly designed to interleave NL and symbolic notation).
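The "prompt as source, output as compiled target" framing implies build-system machinery around the model. A minimal sketch of one such piece, a content-addressed build cache keyed on (prompt, model, seed); `generate` is a hypothetical stand-in for a real, pinned model call, not any actual API:

```python
import hashlib

def build_key(prompt: str, model: str, seed: int) -> str:
    """Content-address the 'compilation unit': identical
    (prompt, model, seed) triples map to the same artifact key,
    mirroring how build caches key on source + toolchain version."""
    digest = hashlib.sha256(f"{model}:{seed}:{prompt}".encode()).hexdigest()
    return digest[:16]

cache: dict[str, str] = {}

def compile_prompt(prompt, model="example-model", seed=0, generate=None):
    """Treat the prompt as source and the LLM output as a build artifact.

    generate is an injected stand-in for a deterministic model call
    (an assumption; see the determinism caveats above).
    """
    key = build_key(prompt, model, seed)
    if key not in cache:
        cache[key] = generate(prompt)
    return cache[key]

calls = []
def fake_generate(p):
    calls.append(p)
    return f"code-for({p})"

out1 = compile_prompt("sort a list", generate=fake_generate)
out2 = compile_prompt("sort a list", generate=fake_generate)
assert out1 == out2 and len(calls) == 1  # second 'rebuild' is a cache hit
```

This only works if generation is actually reproducible for a pinned (model, seed), which is precisely what the non-determinism thread above disputes for hosted APIs; the cache is honest only as long as that assumption holds.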

Skepticism about hype and authorship

  • Several commenters think the “unprecedented” framing and talk of fundamental change are overblown or consultant-driven hype.
  • Others argue that even observers who “only dabble” can provide useful, contextual perspectives—provided their claims about practice are treated with caution.