MCP is a fad
Perceived role and value of MCP
- Many see MCP as a small, boring integration layer: a standardized way for agents to call tools and resources, especially across different AI clients (Claude, ChatGPT, IDEs); a minimal server sketch follows this list.
- Supporters emphasize interoperability and “write once, use in many agents,” likening it to LSP or USB for AI tools rather than just a local scripting mechanism.
- Critics argue the article focuses too much on local filesystem use and misses broader agent-to-service scenarios, including async and long‑running operations, generative UI, and SaaS integration.
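To ground the “write once, use in many agents” claim, here is a minimal sketch of an MCP server exposing one tool over stdio. It follows the shape of the official @modelcontextprotocol/sdk TypeScript README; exact imports and method names can drift between SDK versions, so read it as illustrative of the protocol rather than a pinned API.

```typescript
// Minimal MCP server sketch (TypeScript SDK, stdio transport). The shape
// follows the @modelcontextprotocol/sdk README; treat the exact imports and
// method names as illustrative, not a pinned API.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "demo", version: "1.0.0" });

// One tool with a typed input schema; any MCP client (Claude, an IDE, ...)
// can discover and call it without bespoke glue code.
server.tool(
  "echo",
  { message: z.string() },
  async ({ message }) => ({ content: [{ type: "text", text: message }] })
);

await server.connect(new StdioServerTransport());
```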
Comparison with Skills, CLIs, and OpenAPI/HTTP
- Several argue Claude Skills (markdown + front matter) are simpler and often sufficient; some think useful commands/docs should live in human‑oriented files and be “taught” to the AI, not moved into AI‑specific configs.
- A recurring claim: almost everything MCP does can be done with CLIs plus a shell and tools like just/make, or via existing HTTP/OpenAPI APIs.
- Others counter that MCP’s structured tool schemas, resource mounting, and stateful handles provide more predictable, testable flows than agents dynamically generating glue code or scripts (see the sketch after this list).
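To make the disagreement concrete, here is a sketch of the pro-schema position: the same git CLI an agent could drive through a shell, wrapped behind a declared input contract. The tool name, schema, and bounds are hypothetical, and zod stands in for whatever schema library a server actually uses.

```typescript
// Sketch: the same CLI an agent could drive through a shell, but with a
// declared input contract the client can validate and test.
import { z } from "zod";
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

// Hypothetical input schema: bounded and typed, unlike free-form shell input.
const GitLogInput = z.object({
  path: z.string(),                        // repo to inspect
  limit: z.number().int().min(1).max(100), // cap output size
});

async function gitLog(raw: unknown): Promise<string> {
  const { path, limit } = GitLogInput.parse(raw); // rejects malformed calls
  const { stdout } = await run("git", [
    "-C", path, "log", "--oneline", "-n", String(limit),
  ]);
  return stdout;
}
```

The CLI camp’s reply, of course, is that `git log` already works fine from a shell; the schema buys validation and testability, not new capability.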
Security, lifecycle, and operational concerns
- Strong skepticism around security: MCP is seen as an easy data‑exfiltration vector, especially if people casually add third‑party servers.
- Some argue MCP is “just the protocol” and security is an implementation concern; others reply that in practice bad ops and weak curation are common, so the risk is real.
- Process lifetime and resource usage are highlighted: one‑process‑per‑server can lead to many heavy apps idling, especially with multiple coding agents.
- There is debate over whether MCP meaningfully improves sandboxing vs. running tools in containers/VMs or via safer gateways (one gateway pattern is sketched below).
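One version of the “safer gateway” idea, sketched in TypeScript: a policy layer sits between the agent and its MCP servers, and every tool call is allowed, confirmed, or denied before it is forwarded. All names here (ToolCall, POLICY, the confirm callback) are hypothetical, not part of any SDK.

```typescript
// Sketch of a policy gateway: tool calls from the agent pass a check before
// reaching any MCP server. Everything here is hypothetical.
type ToolCall = { server: string; tool: string; args: Record<string, unknown> };

const POLICY: Record<string, "allow" | "confirm" | "deny"> = {
  "files/read_file": "allow",    // read-only, low risk
  "files/write_file": "confirm", // mutating: ask the human first
  "mail/send": "deny",           // classic data-exfiltration channel
};

async function gate(
  call: ToolCall,
  confirm: (msg: string) => Promise<boolean>
): Promise<ToolCall> {
  const rule = POLICY[`${call.server}/${call.tool}`] ?? "confirm"; // default: ask
  if (rule === "deny") throw new Error(`blocked: ${call.server}/${call.tool}`);
  if (rule === "confirm" && !(await confirm(`Allow ${call.tool}?`))) {
    throw new Error("user declined");
  }
  return call; // forward to the real server only after the check passes
}
```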
Interoperability, auth, and enterprise use
- Pro‑MCP voices stress OAuth-based auth, auditability, permission prompts, and approval workflows as key for exposing enterprise SaaS/APIs to agents (including web UIs like ChatGPT/Claude).
- Others ask why not just expose OpenAPI specs and treat AI calls as normal RPC, avoiding a parallel ecosystem (compare the sketch after this list).
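A sketch of where the two camps converge: whether the wire format is MCP or an OpenAPI-described endpoint, the enterprise work is a token check plus an audit trail. The URL, operation names, header usage, and logging here are all hypothetical.

```typescript
// Sketch of the enterprise-exposure argument: the real work is auth + audit,
// regardless of whether the agent speaks MCP or plain HTTP. All names are
// hypothetical.
const API_BASE = "https://internal.example.com/api"; // hypothetical SaaS API

async function callForAgent(
  op: string,
  body: unknown,
  accessToken: string, // OAuth token scoped to the acting user
  user: string
): Promise<unknown> {
  // Audit before the call so even failed attempts leave a record.
  console.log(JSON.stringify({ at: new Date().toISOString(), user, op }));

  const res = await fetch(`${API_BASE}/${op}`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`${op} failed: ${res.status}`);
  return res.json(); // to the "just RPC" camp, this is all an AI call is
}
```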
Broader AI‑for‑coding and “fad” discourse
- The thread widens into whether AI coding and tool‑calling are fads: some report disastrous experiences and see LLMs as wasteful slop generators; others say the latest models, used well, dramatically speed up meaningful work.
- There’s tension between commenters who prioritize code quality and domain expertise and those who emphasize speed, delegation, and acceptance of “good enough” outputs.