LLMs are the ultimate demoware
LLMs as Demoware & Memetic Tech
- Several comments agree with the “demoware” framing: LLMs produce dazzling, open‑ended demos that invite grand fantasies, but often fail as consistent daily tools.
- Their fluent language leads people to overgeneralize from a few impressive contexts to “it must be good at everything.”
- One view is that models are being deliberately tuned to be convincing and agreeable, making them “memetic parasites” that sell themselves, akin to cigarettes optimized for addictiveness.
AGI, Hype, and Limits
- Defenders argue the article is already outdated: LLMs have improved steadily, reasoning add‑ons are coming, and LLMs will likely be a base layer for AGI.
- Skeptics ask for concrete evidence AGI is “soon” or even inevitable, beyond existence proofs like the human brain.
- Comparisons are drawn to earlier techno-optimism (moon landings → “commercial space soon”), where a plausible path existed; for AGI, commenters say the path is unclear.
Evidence of Real-World Value
- Many report using LLMs daily to “create value,” especially in software; others counter that they see no visible open‑source renaissance or surge of AI‑authored libraries.
- One attempt to measure impact: counting GitHub PRs tagged as LLM-generated and their merge rates (roughly 60–85% accepted), suggesting widespread use; a sketch of one way to run such a count follows this list.
- Critics note this may just show lots of “slop commits” and that PR/LOC counts don’t reliably measure value.
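The thread doesn’t spell out the commenter’s actual methodology, so the following is a minimal sketch of one way such a count could be done, assuming tagged PRs carry an attribution phrase in their body. The attribution string and query are illustrative assumptions, not the commenter’s query; the GitHub search endpoint and qualifiers are real.

```python
import requests

SEARCH_API = "https://api.github.com/search/issues"

def count_prs(query: str) -> int:
    """Return the total number of PRs matching a GitHub search query."""
    resp = requests.get(
        SEARCH_API,
        params={"q": query, "per_page": 1},  # total_count is all we need
        headers={"Accept": "application/vnd.github+json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["total_count"]

# Hypothetical attribution phrase; real tools stamp different strings,
# and the thread doesn't say which ones were counted.
TAG = '"Generated with Claude Code"'

merged = count_prs(f"{TAG} is:pr is:merged")
total = count_prs(f"{TAG} is:pr")
if total:
    print(f"{merged}/{total} PRs merged ({merged / total:.0%})")
```

Note that unauthenticated search requests are heavily rate-limited, and, as the critics in the thread point out, a high merge rate on tagged PRs says nothing about PRs that were never tagged, never opened, or merged without real review.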
Force Multiplier, Not Replacement
- Common theme: LLMs are like a “nail gun” or “shovel,” powerful in skilled hands but hazardous or net-negative for novices.
- Some worry LLMs may reduce output from weaker practitioners even as they boost top performers.
- Experienced developers describe using LLMs to generate migrations or features while they context-switch, treating the model as an asynchronous assistant rather than an autonomous coder.
Math Tutoring and Education
- One detailed account describes months of successful use of frontier models as a math tutor alongside a structured course, citing:
  - Tailored explanations and iterative clarification
  - Step-by-step error checking
  - On-demand problem generation
- Others, especially math educators, are more cautious:
  - LLM explanations can be wrong, shallow, or “Pavlovian,” especially off the training “happy path.”
  - Students often misjudge learning; “impression of understanding” ≠ mastery.
- A counterpoint stresses math’s verifiability: much of the output (generated problems, worked solutions) can be mechanically checked or benchmarked against a trusted curriculum, as sketched below.
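Nobody in the thread posted code, but the verifiability point is easy to make concrete: a claimed solution can be checked by substitution with a computer algebra system. A minimal sketch using sympy, where the equation and the deliberately wrong candidate root are illustrative assumptions:

```python
import sympy as sp

x = sp.symbols("x")

def check_roots(equation: sp.Eq, candidates) -> dict:
    """Substitute each candidate root into the equation and simplify.

    True means the equation holds exactly for that candidate.
    """
    residual = equation.lhs - equation.rhs
    return {c: sp.simplify(residual.subs(x, c)) == 0 for c in candidates}

# Suppose a model claims the roots of x^2 - 5x + 6 = 0 are 2 and 4;
# the true roots are 2 and 3, so the check should flag the 4.
eq = sp.Eq(x**2 - 5 * x + 6, 0)
print(check_roots(eq, [2, 4]))  # {2: True, 4: False}
```

This only catches final-answer errors; the “shallow or Pavlovian explanation” failure mode the educators describe is exactly the part a mechanical check cannot score.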
Progress vs Plateau and Historical Parallels
- Some commenters report seeing no improvement over the past year and argue that most of the gains from current approaches may already be squeezed out; they read the marketing intensity (GPU financial games, omnipresent Copilot ads) as a red flag that this is demoware.
- Others insist models have dramatically improved, especially in reasoning and competition benchmarks, and that dismissing them now resembles past skepticism of IDEs, 4GLs, and no‑code tools.
- The “sigmoid not parabola” comment captures the worry that growth follows an S-curve: after the early spectacular demos, improvement slows sharply rather than continuing to accelerate.