ChatGPT Developer Mode: Full MCP client access
What Developer Mode / MCP Support Provides
- Thread agrees this is effectively “full MCP client support” in ChatGPT, not a coding mode.
- Users can connect arbitrary MCP servers, including write-capable tools, via a hidden “Developer mode” toggle.
- Some confusion about whether this is for the web chatbot vs CLI; clarified as the main ChatGPT UI on the web, limited to Plus/Pro (not Team).
Early Technical Friction and Limitations
- Several reports of OAuth/connector failures when attaching existing MCP servers that work fine with other clients (Claude, LM Studio, etc.).
- Suspected causes include protocol differences (SSE vs HTTP streaming) and strict response validation.
- OpenAI’s Deep Research requires specific tools (“search”/“fetch”), so some MCPs are rejected as non‑compliant, which feels at odds with MCP’s generic design.
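The compliance rule described in the last bullet can be sketched as a simple check over an MCP `tools/list` result. This is a hedged illustration, not the authoritative MCP schema: the exact tool descriptions and input schemas below are assumptions, but the thread's point is that the two tool *names* `search` and `fetch` must both be present or the server is rejected.

```python
# Hedged sketch: Deep Research reportedly rejects MCP servers that do not
# expose tools named "search" and "fetch". The schema below is an
# illustrative approximation of an MCP tools/list result, not the spec.
TOOLS_LIST_RESULT = {
    "tools": [
        {
            "name": "search",
            "description": "Search the corpus and return candidate document IDs.",
            "inputSchema": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
        {
            "name": "fetch",
            "description": "Fetch the full contents of one document by ID.",
            "inputSchema": {
                "type": "object",
                "properties": {"id": {"type": "string"}},
                "required": ["id"],
            },
        },
    ]
}


def is_deep_research_compatible(tools_list: dict) -> bool:
    """Check the rule described in the thread: both required tool
    names must be present in the server's tool listing."""
    names = {t["name"] for t in tools_list.get("tools", [])}
    return {"search", "fetch"} <= names


print(is_deep_research_compatible(TOOLS_LIST_RESULT))  # True
print(is_deep_research_compatible({"tools": []}))      # False
```

This is why a perfectly valid generic MCP server can still be "non-compliant" here: the check is on specific tool names, not on MCP conformance.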
Ecosystem Tools and Use Cases
- People are building MCP gateways/control planes and “meta‑MCP” servers that bundle many tools behind a simple search/execute interface to reduce context pollution.
- Concrete use cases mentioned:
  - Replacing internal admin UIs with MCP tools over existing REST APIs.
  - Browser automation and UI testing (Playwright MCP, Storybook verification).
  - Personal workflows like finding fencing classes then writing to a calendar.
  - GitHub issue fixing, Home Assistant control, storage access (S3/SFTP/etc.), multi‑LLM “consensus” tools, card creation (Anki).
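The "meta-MCP" pattern described above can be sketched as a registry hidden behind just two entry points, so the model's context only ever carries two tool definitions instead of dozens. All tool names and the registry shape here are hypothetical, chosen to echo the use cases in this thread:

```python
# Hedged sketch of a meta-MCP dispatcher: many backend tools live in a
# registry, but only `search` (find a tool) and `execute` (invoke one)
# are exposed to the model. Names are illustrative, not a real server.
from typing import Any, Callable

REGISTRY: dict[str, tuple[str, Callable[..., Any]]] = {
    "calendar.create_event": ("Create a calendar event", lambda title: f"created: {title}"),
    "github.fix_issue": ("Open a PR for a GitHub issue", lambda issue: f"PR for #{issue}"),
    "anki.add_card": ("Add a flashcard to Anki", lambda front: f"card: {front}"),
}


def search(query: str) -> list[str]:
    """Return tool names whose name or description matches the query."""
    q = query.lower()
    return [name for name, (desc, _) in REGISTRY.items()
            if q in name.lower() or q in desc.lower()]


def execute(name: str, **kwargs: Any) -> Any:
    """Invoke a registered tool by name with keyword arguments."""
    _, fn = REGISTRY[name]
    return fn(**kwargs)


print(search("calendar"))  # ['calendar.create_event']
print(execute("anki.add_card", front="lethal trifecta"))  # card: lethal trifecta
```

The context-pollution win is that the per-tool schemas stay server-side; the trade-off is that the model must issue two round trips (search, then execute) instead of calling a tool directly.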
Security, Prompt Injection, and the Lethal Trifecta
- Large subthread on risks when LLMs have: (1) access to secrets, (2) access to untrusted data, and (3) an exfiltration channel.
- Core point: to the model, “instructions” embedded in a web page, email, or log look much like instructions from the user, so untrusted content can redirect the agent (e.g., leaking secrets via crafted URLs or triggering destructive commands).
- Role metadata and structured/constrained generation help but don’t offer hard guarantees; 99% robustness is framed as unacceptable for security.
- Attempts to filter “prompts” with another model are criticized as brittle and inherently cat‑and‑mouse.
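One deterministic way to operationalize the trifecta rule above is to track which legs a session has already acquired and refuse any tool call that would complete all three at once. This is a hedged sketch with hypothetical flag names; as the thread itself notes, policy gates like this reduce blast radius but are not a hard guarantee against prompt injection:

```python
# Hedged sketch of a "lethal trifecta" gate: a session may hold at most
# two of the three legs (secrets, untrusted input, exfiltration channel).
# Once legs (1) and (2) both hold, deny opening leg (3).
from dataclasses import dataclass


@dataclass
class SessionRisk:
    touched_secrets: bool = False      # (1) access to private data
    saw_untrusted_input: bool = False  # (2) exposure to untrusted content
    # (3) an exfiltration channel (e.g. arbitrary outbound HTTP) is what
    # allow_exfil_channel() decides whether to grant.


def allow_exfil_channel(risk: SessionRisk) -> bool:
    """Deny an exfiltration channel once (1) and (2) both hold."""
    return not (risk.touched_secrets and risk.saw_untrusted_input)


risk = SessionRisk()
risk.saw_untrusted_input = True   # agent fetched a web page
print(allow_exfil_channel(risk))  # True: only two legs possible so far
risk.touched_secrets = True       # agent then read an API key
print(allow_exfil_channel(risk))  # False: the call would complete the trifecta
```

Unlike model-based prompt filtering, this check is not cat-and-mouse: it never inspects content, only which capability classes a session has combined.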
Enterprise and Governance Concerns
- Worry that mainstream ChatGPT users will enable dangerous MCPs without understanding prompt injection or blast radius.
- Calls for strong auth, scoping, org‑level policies, and sandboxing (dev containers, no API keys, local-only tools).
- Others argue MCP is already common (e.g., Claude desktop, GPT Actions) and that over‑focusing on MCP obscures broader supply‑chain and agent‑security issues.
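The scoping and org-policy calls above amount to default-deny tool access: permit read-only tools, and require an explicit organizational opt-in for anything write-capable. A minimal sketch, with a hypothetical allowlist and parameter names:

```python
# Hedged sketch of org-level tool scoping: read-only tools pass by
# default; write-capable tools are rejected unless the org opts in.
# The allowlist contents and flag name are illustrative assumptions.
READ_ONLY_ALLOWLIST = {"search", "fetch", "list_buckets"}


def policy_permits(tool_name: str, *, org_allows_writes: bool = False) -> bool:
    """Default-deny: anything outside the read-only allowlist needs an
    explicit org-level opt-in for write-capable tools."""
    if tool_name in READ_ONLY_ALLOWLIST:
        return True
    return org_allows_writes


print(policy_permits("fetch"))          # True: read-only, always allowed
print(policy_permits("delete_bucket"))  # False: write-capable, denied by default
```

In practice this would sit in a gateway/control plane in front of the MCP servers, alongside the auth and sandboxing measures mentioned above, rather than in the chat client itself.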
Comparisons and Overall Sentiment
- Many welcome OpenAI “finally” matching Claude’s MCP capabilities, but see ChatGPT’s implementation as less polished (no true local MCP in desktop, no mobile support yet).
- Some think the danger is overstated if tools are read‑only or tightly sandboxed; others see this as a major new attack surface released with only warnings and user checkboxes.