Ask HN: How do you safely give LLMs SSH/DB access?
Overall Consensus on “Safe” SSH/DB Access
- Many argue you fundamentally cannot make raw SSH/DB access “safe” for an LLM, especially in production.
- Strong sentiment that this is the “worst idea possible” and you simply don’t do it for prod resources.
- LLMs are non‑deterministic; if they have power to break things, they eventually will, regardless of instructions or prompts.
Apply Standard Least-Privilege Security
- Treat the LLM like any other untrusted user: separate OS account, least-privilege permissions, no prod keys.
- Use DB permissions: read-only users, table/column/row-level security, views, prepared statements.
- For SSH: dedicated low-privilege users, restricted shells (rbash), ForceCommand/authorized_keys wrappers, sudo/doas with tight rules, jump hosts, read-only sshfs mounts.
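As a concrete illustration of the ForceCommand/authorized_keys approach, here is a minimal wrapper sketch in Python. It relies on sshd's standard behaviour of placing the client's requested command in SSH_ORIGINAL_COMMAND; the allowlisted commands and install path are hypothetical, not recommendations.

```python
#!/usr/bin/env python3
# Minimal ForceCommand/authorized_keys wrapper sketch: sshd puts the command
# the client asked for into SSH_ORIGINAL_COMMAND; run it only if it matches a
# small allowlist. Installed via an authorized_keys entry such as:
#   command="/usr/local/bin/agent-wrapper.py" ssh-ed25519 AAAA... agent@host
# The allowlist and install path are hypothetical.
import os
import shlex
import subprocess
import sys

ALLOWED_PREFIXES = {
    ("systemctl", "status"),   # "systemctl status <unit>" is fine
    ("journalctl", "-u"),      # unit logs only
    ("df", "-h"),
}

def main() -> int:
    raw = os.environ.get("SSH_ORIGINAL_COMMAND", "")
    argv = shlex.split(raw)
    allowed = any(tuple(argv[:len(p)]) == p for p in ALLOWED_PREFIXES)
    if not argv or not allowed:
        print(f"refused: {raw!r}", file=sys.stderr)
        return 1
    # No shell is involved, so metacharacters in the agent's request stay inert.
    return subprocess.run(argv).returncode

if __name__ == "__main__":
    sys.exit(main())
```

Combined with a dedicated low-privilege account that has no other keys, the agent can only ever run the handful of commands the wrapper recognises.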
Isolation, Sandboxing, and Versioned Data
- Run agents in containers/VMs or disposable environments you’re happy to “throw into the ocean.”
- Use read-only DB replicas or dev/staging copies; some use version-controlled/branching databases and copy-on-write clones so agents can modify a branch and humans later merge (a clone sketch follows this list).
- Several products and projects are mentioned that automate DB branching/sandboxing.
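For the disposable-copy pattern, a minimal sketch assuming a Postgres staging server: clone the staging database for the agent, then drop it afterwards. A plain TEMPLATE clone is a full copy rather than true copy-on-write (which is what the branching products mentioned above optimise); the DSN and database names are hypothetical.

```python
# Sketch: hand the agent a throwaway copy of a staging database, drop it later.
# Assumes psycopg2 and a Postgres staging server; DSN and database names are
# hypothetical. Note: the template source must have no active connections.
import psycopg2

ADMIN_DSN = "dbname=postgres user=admin host=staging-db.internal"

def make_sandbox(source: str = "app_staging", sandbox: str = "agent_sandbox") -> None:
    conn = psycopg2.connect(ADMIN_DSN)
    conn.autocommit = True  # CREATE/DROP DATABASE cannot run inside a transaction block
    with conn.cursor() as cur:
        cur.execute(f'DROP DATABASE IF EXISTS "{sandbox}"')
        cur.execute(f'CREATE DATABASE "{sandbox}" TEMPLATE "{source}"')
    conn.close()

def drop_sandbox(sandbox: str = "agent_sandbox") -> None:
    conn = psycopg2.connect(ADMIN_DSN)
    conn.autocommit = True
    with conn.cursor() as cur:
        cur.execute(f'DROP DATABASE IF EXISTS "{sandbox}"')
    conn.close()
```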
Tool/Proxy Layers Instead of Direct Access
- Replace raw SSH/SQL with tools the agent can call: each tool implements a tightly scoped, deterministic action.
- Use proxies or MCP servers between agent and DB/SSH that:
- enforce allowlists at the protocol/query level,
- parse/validate SQL to permit only a safe subset (sketched after this list),
- hide metadata queries, and
- apply budgeting/limits on large reads.
- Step-scoped permissions (per action) are preferred over long-lived global access.
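A minimal sketch of the validate-and-budget idea, assuming the sqlglot parser; the table allowlist and row cap are made-up policy values, and a real proxy would need to handle dialects, CTEs, and error cases more carefully.

```python
# Sketch of a query-vetting layer: permit only single SELECT statements over
# allowlisted tables, and force a LIMIT so large reads stay within budget.
# Assumes sqlglot; ALLOWED_TABLES and MAX_ROWS are hypothetical policy values.
import sqlglot
from sqlglot import exp

ALLOWED_TABLES = {"orders", "products"}
MAX_ROWS = 500

def vet_query(sql: str) -> str:
    statements = sqlglot.parse(sql, read="postgres")
    if len(statements) != 1 or not isinstance(statements[0], exp.Select):
        raise ValueError("only a single SELECT statement is allowed")
    query = statements[0]

    tables = {t.name for t in query.find_all(exp.Table)}
    if not tables <= ALLOWED_TABLES:
        raise ValueError(f"non-allowlisted tables: {tables - ALLOWED_TABLES}")

    # Replace/force a LIMIT so the agent cannot drag a whole table into context.
    return query.limit(MAX_ROWS).sql(dialect="postgres")

# vet_query("SELECT * FROM orders WHERE status = 'open'")
# -> "SELECT * FROM orders WHERE status = 'open' LIMIT 500"
```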
Autonomy vs Human Review
- Recommended pattern: LLM generates scripts or config changes; humans review, commit, and apply via existing automation (Ansible, Terraform, etc.).
- Some run LLMs freely only on dev/staging, or on cloned DBs, and then “replay” approved changes to prod.
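A minimal sketch of that generate-review-apply flow, assuming Ansible as the existing automation; the proposal path, playbook, and approval mechanism are hypothetical stand-ins for a real review step such as a pull request.

```python
# Sketch: the LLM only *proposes* a change; a human reviews it, and only an
# approved artifact is handed to the normal automation (here: ansible-playbook).
# The path, playbook contents, and approval flag are hypothetical.
import subprocess
from pathlib import Path

PROPOSAL = Path("proposals/0001-nginx-timeout.yml")

def propose(change_yaml: str) -> None:
    # Agent output lands in version control for review, never directly on a host.
    PROPOSAL.parent.mkdir(parents=True, exist_ok=True)
    PROPOSAL.write_text(change_yaml)

def apply_if_approved(approved: bool) -> None:
    if not approved:  # approval would come from code review / PR sign-off
        raise SystemExit("change not approved; nothing is applied")
    # Dry-run first, then the real run, both through the usual automation path.
    subprocess.run(["ansible-playbook", str(PROPOSAL), "--check"], check=True)
    subprocess.run(["ansible-playbook", str(PROPOSAL)], check=True)
```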
PII and Compliance Concerns
- Multiple comments: do not expose PII or auth data to cloud LLMs without contracts and strong controls.
- Use column/row-level security, masking, or redaction tools; many consider even read access to prod PII a hard “no.”
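A minimal sketch of masking before data reaches a cloud model, using only the standard library; the column names and rules are assumptions, and regex redaction is a last line of defence, not a substitute for contracts or column/row-level security.

```python
# Sketch: pseudonymise known PII columns and scrub emails from free text
# before any row is handed to a cloud LLM. Column names are hypothetical.
import hashlib
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PII_COLUMNS = {"email", "phone", "full_name"}

def pseudonymise(value: str) -> str:
    # Stable token: the model can still group rows by customer without seeing who it is.
    return "user_" + hashlib.sha256(value.encode()).hexdigest()[:10]

def redact_row(row: dict) -> dict:
    cleaned = {}
    for col, val in row.items():
        if col in PII_COLUMNS:
            cleaned[col] = pseudonymise(str(val))
        elif isinstance(val, str):
            cleaned[col] = EMAIL_RE.sub("[redacted email]", val)  # PII hiding in free text
        else:
            cleaned[col] = val
    return cleaned

# redact_row({"email": "a@b.com", "note": "customer a@b.com called", "total": 42})
```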