Claude Code for Infrastructure

Product concept & intended value

  • Fluid is pitched as “Claude Code for infrastructure”: an agent that connects to sandboxed clones of production VMs/Kubernetes, runs commands, edits files, and outputs reproducible IaC (e.g., Ansible playbooks).
  • Goal: give LLMs a realistic environment to explore and test changes instead of guessing from static prompts, while keeping prod locked down.

Comparison to existing tools / “is this solved?”

  • Several commenters say they already use Claude Code (or similar) + Terraform/Pulumi/CloudFormation, sometimes in separate cloud accounts, and don’t see what Fluid adds.
  • Others point to Terraformer, import tools, GitOps, and Pulumi Neo as existing ways to reconstruct or operate infra and let LLMs work safely.
  • Some argue that good IaC plus reverse-import tooling is enough: agents should modify the IaC directly rather than SSHing around in sandboxes.

Safety, environments, and prod access

  • Strong pushback against any LLM touching prod; some say their orgs wouldn’t consider it at all.
  • Supporters like the idea of “sandpit” or “lab bench” environments: cloned, disposable, prod-like spaces where agents can break things.
  • Multiple people ask for clear read-only modes, explicit explanation of destructive actions, and guardrails for K8s-like scenarios where agents have previously deleted critical resources.
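The read-only guardrail several commenters ask for has a concrete shape under standard Kubernetes RBAC. The role name, resource list, namespace, and service account below are hypothetical illustrations, not anything Fluid actually ships:

```shell
#!/bin/sh
set -eu

# Write a ClusterRole that grants only read verbs; an agent bound to it
# can inspect workloads but cannot create, mutate, or delete anything.
cat > agent-readonly.yaml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: agent-readonly
rules:
  - apiGroups: ["", "apps", "batch"]
    resources: ["pods", "services", "configmaps", "deployments", "jobs"]
    verbs: ["get", "list", "watch"]
EOF

# Against a real cluster (shown for illustration, not executed here):
#   kubectl apply -f agent-readonly.yaml
#   kubectl create clusterrolebinding agent-readonly \
#     --clusterrole=agent-readonly --serviceaccount=agents:fluid-agent
echo "wrote agent-readonly.yaml"
```

Binding the agent's service account to a role like this makes "read-only mode" an API-server-enforced property rather than a promise in the agent's prompt.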

Cost and resource concerns

  • Several worry about runaway cloud spend: spinning up many sandboxes so agents can “fumble about” is seen as wasteful and a potential path to huge AWS bills.
  • Others note that cloning complex stacks is non-trivial and under-specified: duplicating an app server is easy, but safely cloning a production database is not.

UX, docs, and install feedback

  • Repeated criticism that the landing page is vague; commenters found the HN post and README far clearer.
  • Suggestions: better demo, clearer explanation of “production-cloned sandbox,” highlight RO workflows.
  • Security-conscious users dislike curl | bash as the main install mechanism and point out the irony given the product’s safety pitch.
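The alternative these commenters prefer is a download-inspect-verify-run flow. The sketch below is illustrative: the file names and the stand-in "install script" are fabricated locally, and the real download step (commented out) would use the vendor's actual URL and a checksum published out-of-band:

```shell
#!/bin/sh
set -eu

# Stand-in for the real download step, which would be something like:
#   curl -fsSL "$INSTALL_URL" -o install.sh   # URL hypothetical
# Here we fabricate a trivial script locally so the flow is self-contained.
printf 'echo "installing..."\n' > install.sh

# Verify against a checksum (in practice published by the vendor), so a
# silently changed script fails loudly before it ever runs.
sha256sum install.sh > install.sh.sha256
sha256sum -c install.sh.sha256

# Read the script first (e.g. `less install.sh`), then run it deliberately.
sh install.sh
```

The point is simply that the script exists on disk, is checksummed, and is readable before execution, none of which holds for a one-shot `curl | bash` pipe.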

Broader AI/infra meta-discussion

  • Some see this as yet another “AI wrapper” over existing capabilities, part of a shovelware wave of infra tooling rather than end-user products.
  • Others are enthusiastic about ops/observability as a strong AI-agent use case and think Ansible playbook generation from sandbox experiments is a clever pattern.