Everything wrong with MCP
Security, Authentication & Trust
- Heated debate over MCP shipping without built‑in auth: some see it as inexcusable (“you can’t bolt on security later”), others argue transport‑level auth (TLS, HTTP auth, mTLS, PAM for stdio) is sufficient and better standardized anyway.
- The real gap identified is authorization (authZ) propagation in multi‑tenant scenarios: how to pass user‑level permissions through MCP without routing them via the LLM, and without exposing, say, a whole company's Google Drive to every chat user (see the sketch after this list).
- An OAuth‑style authorization RFC is in progress, with contributors from multiple major identity vendors; people see this as promising but very early.
- Many comments stress that untrusted remote MCP servers are dangerous: they can run arbitrary local code, exfiltrate data, or escalate via prompt/tool injection, a trust problem similar in spirit to installing unvetted VS Code extensions or NPM packages, with prompt injection playing the role SQL injection once did.
- Others push back that this is mostly a usage/hosting problem (sandboxing, least privilege, local vs remote deployment), not something MCP alone can solve.
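As a rough illustration of the authorization gap above, here is a minimal, hypothetical sketch (none of these names come from the MCP spec or any SDK) of keeping user‑level permissions out of the model's hands: the host resolves a per‑user context from its own auth layer and passes it to the tool out of band, so the LLM supplies nothing but the query and can never widen the scope.

```python
# Hypothetical sketch of authZ propagation: the host app attaches a per-user
# context to each tool call out of band; the model never sees credentials and
# cannot widen the scope. All names below are illustrative, not MCP APIs.
from dataclasses import dataclass

# Stand-in for a company-wide document store the server can reach.
DRIVE = {
    "alice": ["q3-roadmap.docx", "perf-review.txt"],
    "bob": ["invoice-0042.pdf"],
}

@dataclass
class UserContext:
    user_id: str                                  # resolved by the host's session/OAuth layer
    scopes: tuple[str, ...] = ("drive:read",)

def search_drive(ctx: UserContext, query: str) -> list[str]:
    """Tool body: only the calling user's files are ever visible."""
    if "drive:read" not in ctx.scopes:
        raise PermissionError("missing drive:read scope")
    return [name for name in DRIVE.get(ctx.user_id, []) if query.lower() in name.lower()]

if __name__ == "__main__":
    # The LLM chose only the `query` argument; the context came from the host.
    print(search_drive(UserContext("alice"), "roadmap"))   # ['q3-roadmap.docx']
    print(search_drive(UserContext("bob"), "roadmap"))     # []
```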
MCP vs APIs, OpenAPI, and CLIs
- A recurring question: “Why not just use HTTP + OpenAPI?” Critics call MCP a redundant, NIH reimplementation and note LLMs can already consume OpenAPI specs or docs directly.
- Pro‑MCP responses:
  - MCP is itself an API spec, but one oriented toward LLM tool‑calling: standard shapes for tools, resources, prompts, progress, cancellation, etc. (see the minimal server sketch after this list).
  - It lets generic clients (Claude Desktop, code editors, other agent frameworks) talk to arbitrary tools without each app reinventing its own integration glue.
  - It covers non‑HTTP things (local CLIs, databases, hardware) via stdio, which OpenAPI alone does not.
- Some argue a clever CLI + help text is often enough; others counter that MCP provides a consistent machine‑readable layer for many such tools.
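To make the "standard shapes" point concrete, here is a minimal sketch of an MCP server exposing a single tool over stdio, assuming the official Python SDK (`mcp` package) and its FastMCP helper; the tool name and logic are invented for illustration.

```python
# Minimal MCP server sketch using the Python SDK's FastMCP helper
# (assumes `pip install mcp`); the tool itself is a toy example.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")

@mcp.tool()
def word_count(text: str) -> int:
    """Count whitespace-separated words in a piece of text."""
    return len(text.split())

if __name__ == "__main__":
    # stdio transport: a generic MCP client (Claude Desktop, an editor agent,
    # etc.) launches this script and discovers `word_count` via tools/list,
    # with the signature and docstring turned into the tool's schema.
    mcp.run()
```

The same server works unchanged across any MCP‑aware client; a client config only needs the command to launch the script, which is the "generic clients, arbitrary tools" argument in miniature.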
Dynamic Tools, Context Limits & Injection
- Disagreement over whether MCP tools are “static”: the spec supports dynamic tool lists via list‑changed notifications (see the message sketch after this list), but current clients often make adding or removing servers awkward.
- Several commenters emphasize a fundamental scaling issue: every tool definition consumes context, so many servers/tools can degrade LLM reliability, increase cost, and enlarge both cross‑tool interference and the injection surface.
- Experiments and security write‑ups show that “tool description/resource poisoning” and cross‑server prompt injection are real, especially since current clients don’t sandbox tools from one another.
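For the dynamic‑tools point above, a sketch of the underlying JSON‑RPC traffic as described in the MCP spec: a server that advertised the tools `listChanged` capability notifies the client, which then re‑fetches the list; every definition returned is text that lands in the model's context, which is exactly where the poisoning risk lives. The `id` value here is arbitrary.

```python
# Sketch of the JSON-RPC messages behind dynamic tool lists (method names per
# the MCP spec; the request id is arbitrary).
import json

# Server -> client: the tool set changed since the last tools/list.
list_changed = {
    "jsonrpc": "2.0",
    "method": "notifications/tools/list_changed",
}

# Client -> server: re-enumerate tools. Each returned definition (name,
# description, JSON Schema for inputs) is injected into the model's context,
# so a malicious description here is effectively a prompt injection.
list_request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/list",
}

for msg in (list_changed, list_request):
    print(json.dumps(msg, indent=2))
```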
Maturity, Ecosystem & Hype
- MCP is only a few months old; many see its flaws (security, streaming limitations, weak typing/return schemas, no built‑in cost controls) as expected in v1 and fixable over time.
- Others think it’s a rushed, over‑marketed “framework in protocol clothing” that mainly serves big LLM providers by centralizing tool ecosystems and creating a new moat.
- Actual usage exists (Claude Desktop extensions, code agents, custom servers for storage, databases, hardware), but user reports are mixed: power users find value, non‑experts often find it confusing or underwhelming.
Broader Agent & UX Concerns
- A number of criticisms are really about autonomous agents, not MCP specifically: over‑trusted models, dangerous default behaviors, and lack of good UIs to inspect/approve actions.
- Some argue general chatbots may not be the long‑term interface; specialized apps with their own tooling might matter more, making MCP mainly a niche glue layer for chat‑style clients.