Making Wolfram tech available as a foundation tool for LLM systems

Reactions to the Article and Writing Style

  • Several readers enjoyed the piece and see the author as an original thinker with a long AI/computation history.
  • Others found the post self-aggrandizing and “all marketing,” more about naming and selling “CAG” than about new ideas.
  • A big side-thread fixates on writing style: heavy em-dash usage and “it’s not just X, it’s Y” constructions led some to suspect “AI slop.”
  • Others point out this style long predates LLMs and is idiosyncratically human, if verbose; some found it genuinely fun and conversational.
  • Orwell’s argument against stale, prefab phrases is invoked as newly relevant in the LLM era.

Is Wolfram Tech Actually Useful for LLMs?

  • Users who wired Claude/agents into Wolfram report worse results than with Python on many tasks: slower, poorer answers, and models have seen far less Wolfram Language in training.
  • Consensus: Python+SymPy (and related libraries) is better for most “internet/application” tasks.
  • Wolfram’s clear edge is seen in advanced symbolic computation: exact algebra, difficult integrals, special functions, series, and equation solving over specific domains.
  • The open question is whether typical LLM use cases hit those hard symbolic niches often enough to justify the extra cost and complexity.
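To make the comparison concrete, here is a small sketch of the kind of exact symbolic work commenters say Python+SymPy already handles well: exact factoring, a closed-form definite integral, and equation solving restricted to a specific domain. This is illustrative only, not a benchmark against Wolfram.

```python
import sympy as sp

x = sp.symbols("x")

# Exact algebra: factor a polynomial symbolically
poly = sp.factor(x**3 - 3*x**2 + 3*x - 1)   # (x - 1)**3

# A definite integral with an exact closed form
gauss = sp.integrate(sp.exp(-x**2), (x, -sp.oo, sp.oo))  # sqrt(pi)

# Solving over a specific domain, per the "equation solving
# over specific domains" point above
roots = sp.solveset(sp.sin(x), x, domain=sp.Interval(0, 2*sp.pi))

print(poly, gauss, roots)
```

The harder cases the thread attributes to Wolfram (difficult special functions, obscure series) are exactly where sketches like this start to fail or return unevaluated results.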

CAG vs RAG and the Role of Deterministic Computation

  • CAG is viewed by some as mostly a new label for “LLM as natural-language front-end to a computation engine” (something many already do with Python sandboxes).
  • Supporters argue the real value is correctness: for safety‑critical math (engineering, dosing, finance) you want deterministic engines, not probabilistic reasoning.
  • Skeptics say math is finite and stable enough to be embedded directly into general or math‑tuned LLMs; an extra “Wolfram layer” feels unnecessary or like lock‑in.
  • Some ask what’s “infinite” about CAG versus “just call the Wolfram API,” finding that part of the pitch unclear.
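The "LLM as natural-language front-end to a computation engine" pattern can be sketched in a few lines. Everything here is hypothetical scaffolding (the `tool_call` shape and `compute` helper are invented for illustration, not any real API): the model emits a structured call, and a deterministic engine produces the exact answer instead of the model sampling one.

```python
from fractions import Fraction

def compute(expression: str) -> str:
    """Deterministically evaluate exact rational arithmetic.

    A toy stand-in for a real computation engine (Wolfram, SymPy, ...):
    the same input always yields the same exact output, with no sampling.
    Not safe for untrusted input; a real system would parse, not eval.
    """
    allowed = {"Fraction": Fraction}
    return str(eval(expression, {"__builtins__": {}}, allowed))

# What an LLM would emit instead of guessing the number itself:
tool_call = {
    "tool": "compute",
    "args": {"expression": "Fraction(1, 3) + Fraction(1, 6)"},
}
result = compute(**tool_call["args"])
print(result)  # "1/2", exact, every time
```

This is the correctness argument in miniature: for dosing or engineering math, "1/2" from a deterministic engine beats a probabilistically generated "0.4999".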

Open Source, Science, and Proprietary Math Software

  • Large debate over whether proprietary CAS systems are “against the spirit of science.”
  • One side: software is near‑zero marginal cost; public money should fund open alternatives (SymPy, Sage, etc.) and AI could help implement missing advanced algorithms.
  • The other: people need salaries; historically, science has shared methods but not free labs, and commercial CAS fills that “lab” role.
  • There’s criticism of Wolfram’s restrictive, per‑core licensing and weak ecosystem compared to the Python world, and calls for institutional funding of open scientific computing.

Sandboxing, Open Implementations, and Ecosystem

  • Secure sandboxing is flagged as essential for any computation‑augmented LLM; Python has evolving tooling, and it’s unclear how mature Wolfram’s story is.
  • An open-source Wolfram Language interpreter (WASM-based) and other Mathematica-like projects (Mathics, Sage integration, etc.) are mentioned; they aim to re‑create both language and large parts of the standard library.
  • Commenters emphasize that much of Mathematica’s value lies in its huge, coherent standard library and curated data, not just the core language.
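The sandboxing point above can be illustrated with the simplest form of isolation Python offers: run the model-generated snippet in a separate interpreter process with a hard timeout. This is a toy sketch, not a real sandbox; production setups add filesystem, network, and memory restrictions on top.

```python
import subprocess
import sys

def run_untrusted(code: str, timeout_s: float = 2.0) -> str:
    """Execute code in a child Python process; kill it if it hangs."""
    try:
        proc = subprocess.run(
            [sys.executable, "-I", "-c", code],  # -I: isolated mode
            capture_output=True, text=True, timeout=timeout_s,
        )
        return proc.stdout.strip()
    except subprocess.TimeoutExpired:
        return "<timed out>"

print(run_untrusted("print(2**10)"))      # "1024"
print(run_untrusted("while True: pass"))  # "<timed out>"
```

Even this minimal pattern is well-trodden in the Python ecosystem; the thread's concern is that the equivalent story for a Wolfram kernel is less publicly documented.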

Adoption, Timing, and Business/UX Critiques

  • Some argue Wolfram’s closed nature doomed it as a “foundation tool” for LLMs; if it had been opened a decade ago, it might already be ubiquitous in model training and tooling.
  • Counterpoint: open‑sourcing earlier would likely have sacrificed years of revenue and slowed development.
  • Several see Mathematica as niche (more like “Excel for math” than a general programming platform), which may explain why open clones still lag.
  • Users complain that Wolfram’s product and licensing lineup is confusing; they want a simple, all‑in bundle instead of multiple SKUs and unclear integration paths (e.g., for MCP with existing local licenses).