MCP doesn't need tools, it needs code
Abbreviations, Audience, and Gatekeeping
- Several commenters object to “MCP” in the title without initial expansion; others argue the article was updated to fix this and was always aimed at people already using MCP.
- There’s debate over whether “if you don’t know the acronym, the article isn’t for you” is reasonable targeting or textbook gatekeeping.
- A side thread covers best practices for introducing initialisms (spell out once + parentheses) and why the HTML <abbr> element is not a full substitute, especially on mobile.
What MCP Is Supposed to Add
- Supporters say the main value is capability/endpoint discovery plus a uniform calling interface: the client discovers tools and their descriptions dynamically instead of hard‑wiring specs into prompts (see the sketch after this list).
- Compared to OpenAPI/Swagger, MCP tools are framed around what they “do” for an LLM, not an exhaustive machine‑oriented API surface, and can be curated or composed.
- For stateful workflows (e.g., browser automation), tying tools to conversation state is cited as a reason MCP might be preferable to plain HTTP APIs or gRPC.
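As a rough illustration of that discovery-then-call flow, here is what the two relevant MCP JSON-RPC exchanges look like, written as TypeScript object literals. The method and field names follow the published spec, but the "search_issues" tool and its schema are invented for the example.

```typescript
// Discovery: one request lists every tool the server exposes.
const listToolsRequest = { jsonrpc: "2.0", id: 1, method: "tools/list" };

// Response: each tool carries a name, an LLM-facing description, and a JSON Schema
// describing its arguments. "search_issues" is an invented example tool.
const listToolsResponse = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    tools: [
      {
        name: "search_issues",
        description: "Search the issue tracker with a free-text query",
        inputSchema: {
          type: "object",
          properties: { query: { type: "string" } },
          required: ["query"],
        },
      },
    ],
  },
};

// Invocation: every discovered tool is called through the same uniform method.
const callToolRequest = {
  jsonrpc: "2.0",
  id: 2,
  method: "tools/call",
  params: { name: "search_issues", arguments: { query: "flaky CI on main" } },
};

console.log(listToolsRequest, listToolsResponse, callToolRequest);
```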
Code Execution vs Tools
- Many agree with the article’s thesis: giving the model a single “uber‑tool” (Python/JS eval in a sandbox) can be more powerful, and closer to what models are trained on, than dozens of fine‑grained MCP tools (sketched after this list).
- Commenters note that LLMs “natively” know bash, HTTP, and common code patterns from training, but must be carefully prompted to use bespoke MCP tools, which can degrade behavior.
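A minimal sketch of that “uber‑tool” idea: one tool whose only argument is a program, executed in a separate process. The tool name, schema, and timeout are illustrative assumptions, and running `node --eval` on its own is deliberately naive; the sandboxing concerns in the next section apply.

```typescript
// One "uber-tool" instead of many: the model writes a program, the host runs it.
// Tool name, schema, and the 5-second timeout are illustrative assumptions.
import { execFileSync } from "node:child_process";

export const executeCodeTool = {
  name: "execute_code",
  description: "Run a short JavaScript program and return its stdout",
  inputSchema: {
    type: "object",
    properties: { source: { type: "string" } },
    required: ["source"],
  },
};

export function runCode(source: string): string {
  // A separate `node --eval` process is the bare minimum of isolation; see the
  // sandboxing discussion below before exposing anything like this.
  return execFileSync("node", ["--eval", source], {
    encoding: "utf8",
    timeout: 5_000,
  });
}
```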
Security and Sandboxing
- Strong pushback on “just run eval()”: people see it as remote code execution, especially dangerous when driven by user input or external models.
- Others describe running assistants inside containers, Guix environments, or Bubblewrap sandboxes, and advocate object‑capability‑style sandboxing and network segmentation as minimum hygiene (a Bubblewrap sketch follows this list).
- MCP itself is seen as neither secure nor insecure; risk comes from exposing powerful tools (shell, package managers, internet) without strict scoping.
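For concreteness, a hedged sketch of the Bubblewrap approach mentioned above: model-generated code runs under bwrap with read‑only system mounts, a throwaway /tmp, and unshared namespaces (so no network). The bind mounts assume a typical merged-/usr Linux layout and are a starting point rather than a complete policy.

```typescript
// Run model-generated code under Bubblewrap: read-only system mounts, a private
// /tmp, and --unshare-all (no network, PID, or IPC sharing with the host).
// Adjust the bind mounts and symlinks for your distribution.
import { spawnSync } from "node:child_process";

export function runInBubblewrap(source: string): string {
  const result = spawnSync(
    "bwrap",
    [
      "--ro-bind", "/usr", "/usr",     // system files, read-only
      "--symlink", "usr/bin", "/bin",
      "--symlink", "usr/lib", "/lib",
      "--symlink", "usr/lib64", "/lib64",
      "--proc", "/proc",
      "--dev", "/dev",
      "--tmpfs", "/tmp",               // scratch space that never persists
      "--unshare-all",                 // fresh namespaces; notably, no network
      "--die-with-parent",
      "node", "--eval", source,
    ],
    { encoding: "utf8", timeout: 10_000 },
  );
  if (result.error) throw result.error;
  if (result.status !== 0) throw new Error(result.stderr);
  return result.stdout;
}
```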
Tool Explosion and Practical Limits
- Experience reports say that beyond ~30 tools, models often choose the wrong tool; with ~100 tools, behavior degrades badly.
- Suggested mitigations: fewer tools, sub‑agents with disjoint tool sets (sketched after this list), or tools that dynamically activate subsets.
- Some see MCP tools more as guardrails/fettering than “connecting your model to the world,” which can be positive for narrow agents but limiting for pair‑programmer‑style usage.
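A sketch of the sub‑agent mitigation, with made-up domains and tool names: a router classifies the request into a domain first, and each sub‑agent is only ever prompted with its own small, disjoint tool set.

```typescript
// Sub-agents with disjoint tool sets: a router picks a domain, and the chosen
// sub-agent only ever sees its own handful of tools. Domains and tool names
// here are invented for illustration.
type Tool = { name: string; description: string };

const subAgentTools = {
  vcs: [
    { name: "git_diff", description: "Show pending changes" },
    { name: "git_commit", description: "Commit staged changes" },
  ],
  issues: [
    { name: "search_issues", description: "Full-text search over the tracker" },
    { name: "update_issue", description: "Change an issue's status or assignee" },
  ],
} satisfies Record<string, Tool[]>;

function toolsFor(domain: keyof typeof subAgentTools): Tool[] {
  // Each sub-agent prompt stays well under the ~30-tool threshold reported above.
  return subAgentTools[domain];
}

console.log(toolsFor("issues").map((t) => t.name)); // ["search_issues", "update_issue"]
```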
Alternatives and Developer Friction
- Multiple alternatives are mentioned (e.g., UTCP, YAML‑described MCP servers, custom protocols) aiming to call HTTP/CLI/WebSocket endpoints directly without bespoke MCP servers.
- One developer reports chronic frustration trying to build a simple MCP‑based CLI, concluding a plain REST API would have been simpler.
- Some argue MCP is “just a well‑structured prompt” (illustrated below) and that, for coding agents, a handful of direct tools (search, edit, refactor) plus editor/LSP integration is already highly effective.
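To illustrate the “well‑structured prompt” view: from the model’s side, discovered tool schemas ultimately land as text in its context. The rendering below is invented for the example; real clients and native tool‑calling APIs serialize tools differently.

```typescript
// The "well-structured prompt" view: discovered tool schemas end up as text in
// the model's context. This particular rendering format is an assumption.
type ToolSpec = { name: string; description: string; inputSchema: object };

function renderToolsIntoSystemPrompt(tools: ToolSpec[]): string {
  const specs = tools
    .map(
      (t) =>
        `## ${t.name}\n${t.description}\n` +
        `Arguments (JSON Schema): ${JSON.stringify(t.inputSchema)}`,
    )
    .join("\n\n");
  return (
    "You may call a tool by replying with a single JSON object " +
    '{"tool": "<name>", "arguments": {...}}.\n\n' +
    specs
  );
}

console.log(
  renderToolsIntoSystemPrompt([
    {
      name: "search_issues",
      description: "Search the issue tracker with a free-text query",
      inputSchema: { type: "object", properties: { query: { type: "string" } } },
    },
  ]),
);
```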