Install.md: A standard for LLM-executable installation

Purpose and Claimed Benefits

  • Proposed as a predictable, standard location (install.md) where LLM agents can find installation instructions without crawling sitemaps or llms.txt and spending extra tokens (a hypothetical example follows this list).
  • Advocates frame it as a “runtime for prose”: human-readable, natural-language instructions that agents execute, making author intent more transparent than a long shell script.
  • Installation is seen as a constrained domain where current LLMs already perform reasonably well, with the hope that standardized prose improves success and UX.
  • Some see this as an early example of a broader shift where prompts/descriptions become the “program,” at least for narrow tasks like installs.
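
The proposal does not prescribe a schema, so the following is purely a hypothetical sketch of what an install.md for a fictional CLI called mytool might contain; every name and URL below is illustrative, not part of any published spec.

```markdown
# Install mytool

1. Check that `curl` and `tar` are available; stop and tell the user what is missing.
2. Detect the operating system and CPU architecture.
3. Download the matching release archive from https://example.com/mytool/releases/latest (placeholder URL).
4. Extract the `mytool` binary into `~/.local/bin`, creating the directory if needed.
5. If `~/.local/bin` is not on PATH, ask the user before editing any shell profile.
6. Run `mytool --version` and report the output.
```

The appeal advocates cite is that an agent can follow these steps and explain each one, whereas a human skimming an equivalent shell script has to reverse-engineer the same intent.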

Comparison to Existing Tools

  • Many argue the problem is already solved by package managers, containers, Nix/flake.nix, or configuration tools like Ansible/Puppet/Chef.
  • Installing software is described as something that should remain deterministic and auditable; throwing out decades of devops tooling for markdown+LLM is called “bonkers” by detractors.
  • Several suggest using LLMs to generate or audit install scripts/configs once, not as a runtime every time a user installs.

Security, Determinism, and Reliability

  • Strong concern that this is effectively “curl | bash with extra steps,” now combining script risk with LLM vulnerabilities (prompt injection, hallucinations, randomness).
  • Deterministic shell scripts can be audited, statically analyzed, and checksum-pinned, and they behave the same on every machine (see the verification sketch after this list); LLM behavior is non-deterministic and model-dependent.
  • Critics emphasize that users must now trust both the author and the LLM’s interpretation, making incidents harder to debug and responsibility murkier.
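
As a point of contrast for the determinism argument, a conventional installer can be pinned and verified before it ever runs; the sketch below is illustrative only, and both the URL and checksum are placeholders.

```sh
#!/usr/bin/env bash
# Fetch an installer, verify it against a published checksum, then audit it
# by hand before running it. A prose-interpreting agent cannot offer the
# same run-to-run guarantee.
set -euo pipefail

INSTALLER_URL="https://example.com/mytool/install.sh"   # placeholder
EXPECTED_SHA256="0000000000000000000000000000000000000000000000000000000000000000"  # placeholder

curl -fsSLo install.sh "$INSTALLER_URL"
echo "${EXPECTED_SHA256}  install.sh" | sha256sum -c -   # aborts if the script changed
less install.sh                                          # human review before execution
bash install.sh
```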

Readability vs Precision

  • Proponents say prose instructions (e.g., for installing a tool like bun) are shorter and easier for users to grasp at a glance than a multi-hundred-line install script (contrasted in the sketch after this list).
  • Opponents counter that good code is already the clearest description of behavior; prose is inherently ambiguous and context-sensitive, and LLMs are no more trustworthy than a random human following a how-to.
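
The bun example, as commenters frame it, boils down to the contrast below; the one-liner is the command bun's own documentation gives, while the prose version is a paraphrase of what an install.md might say instead.

```sh
# What the docs tell humans to run today: fetch a multi-hundred-line bash
# script and execute it sight unseen.
curl -fsSL https://bun.sh/install | bash

# What an install.md might say instead, in one sentence:
#   "Download the bun binary for your OS and architecture from the latest
#    release and put it on your PATH."
# The prose is easier to skim; the script pins down platform detection and
# error handling that the prose leaves to the agent.
```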

Hybrid and Alternative Approaches

  • Suggested compromises:
    • Keep conventional installers, plus an LLM-oriented doc/knowledge base for troubleshooting.
    • Or use install.md purely as input for generating an install.sh that the user can audit and reuse deterministically (sketched after this list).
  • Some experimental tools (e.g., claude-run/remote execution) are cited, but many commenters insist this belongs in sandboxed or toy environments, not in standard practice.
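
The “generate once, then audit” compromise could look like the workflow below; my-agent is a stand-in for whatever LLM CLI is on hand, and its flags are hypothetical, since nothing of the sort is standardized.

```sh
# Run once by the maintainer (or a cautious user), not on every install.
my-agent --file install.md \
         --prompt "Translate these instructions into a POSIX install.sh" \
         > install.sh

# The deterministic artifact is what gets reviewed, pinned, and shipped.
shellcheck install.sh     # static analysis of the generated script
git add install.sh        # commit it so every user runs the same bytes
sha256sum install.sh      # publish the hash alongside the release
```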

Meta and Reception

  • The original line “installing software should be left to AI” drew heavy backlash and was later toned down.
  • Overall sentiment in the thread skews strongly skeptical, with a minority genuinely excited to explore “executable markdown” and prose-as-runtime ideas.