Agents Are Not Enough

LLMs as UI and Workflow Builders

  • Several comments propose using “LLM as UI”: the human remains the true agent, the LLM is just a front-end to tools/CRUD APIs.
  • A popular variant: use LLMs to generate workflows/macros from natural language, then save and run them deterministically without the LLM.
  • Advocates say this reduces UI complexity, helps onboarding/discoverability, and lets non-experts automate complex apps (e.g., CRM/ERP).

Determinism, DSLs, and Verification

  • Skeptics worry about non-determinism: even at temperature 0, parallelism and floating-point issues can produce variation.
  • Critics argue that if users must verify workflows, they effectively need to understand an underlying DSL; in that case, a GUI or direct DSL may be simpler.
  • Others counter that workflows can be summarized in natural language and tested on sample data; raw DSL exposure might only be needed for power users.
  • Some note that natural language is imprecise; attempts at precision tend to turn into unreadable “legalese.”

Definitions and Hype Around “Agents”

  • The thread notes that “agent” has long been poorly defined and now covers everything from thermostats to LLM-plus-tools.
  • Multiple commenters say the term is blurry or becoming meaningless, similar to “data science,” and is heavily driven by marketing and funding.
  • There is disagreement over whether “agents” should mean any acting program, or only systems with goals, internal world models, and autonomy.

Practical Value and Current Limits of LLM Agents

  • Some see agents as “LLM calls in a loop” with low real-world success rates, expensive token usage, and error compounding.
  • Others think agents will improve as tool-calling and parameter mapping get better, but note current multi-turn accuracy is still weak.
  • A cited view (from another source) is that many tasks are better solved with single LLM calls plus retrieval, not full agents.

Safety and Ecosystem Concerns

  • Prompt injection and secure tool use are seen as unsolved problems for powerful autonomous agents.
  • There is speculation that ecosystems will naturally settle different levels of human vs. machine agency based on economic incentives.

Reception of the Paper Itself

  • Several commenters find the paper vague, high-level, or “academic theater,” especially its cognitive architecture section.
  • Others still find value in its attempt to frame and critique current “agent” narratives, even if underspecified.