OpenAI is quietly adopting skills, now available in ChatGPT and Codex CLI
What “skills” are
- Described as small, self-contained bundles: a `SKILL.md` with frontmatter (name + description) plus optional reference docs and scripts.
- On session start, coding agents scan skills folders and inject only the short descriptions into the system prompt; full content is lazily read when relevant.
- Many commenters frame this as “dynamic prompt/context extension” or “context-management for tasks,” often analogous to sub-agents or English header files.
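The bundle layout described above can be illustrated with a minimal, hypothetical skill file (the skill name, description, and referenced files here are invented for illustration, not taken from any vendor's examples):

```markdown
---
name: release-notes
description: Drafts release notes from merged changes. Use when the user asks to prepare a release.
---

# Release notes skill

1. Collect changes since the last tag with `git log --oneline`.
2. Group entries under Features / Fixes / Chores.
3. Follow the tone guide in `reference/style.md`.
```

Only the `name` and `description` lines are injected at session start; the numbered steps and any files under `reference/` are read only if the agent decides the skill applies.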
Usage patterns and benefits
- Common uses: project-specific coding help, debugging flows, CI/build result retrieval, document editing, front-end design, generating charts via Python, and browser-automation or reverse-engineering tooling (Playwright, Ghidra).
- People like being able to codify repeatable workflows (“next time, just do this”) and keep them out of the main context until needed.
- A recurring pattern is having the LLM write or update skills itself, then lightly editing them. Teams see potential for shared skill libraries encoding house style, APIs, and processes.
Implementation & ecosystem
- Skills are supported in Codex CLI and ChatGPT’s code environment; Claude Code pioneered the pattern; Gemini and other tools are adding equivalents.
- Local LLMs can also drive skills if they have shell/file access and enough context for long tool-calling loops.
- Comparisons to MCP: MCP exposes big tool catalogs up front; skills are lighter, pay-per-use, and often built atop CLIs. Some see skills as a better default for many use cases, with MCP reserved for richer RPC-style integrations.
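The discovery-and-lazy-load pattern these tools share can be sketched in a few lines. This is a minimal illustration, not any vendor's actual implementation; the folder layout (`<skill>/SKILL.md`) and simple frontmatter parsing are assumptions:

```python
from pathlib import Path


def parse_frontmatter(text: str) -> dict:
    """Extract key: value pairs from a leading YAML-style frontmatter block."""
    meta = {}
    lines = text.splitlines()
    if lines and lines[0].strip() == "---":
        for line in lines[1:]:
            if line.strip() == "---":
                break
            key, sep, value = line.partition(":")
            if sep:
                meta[key.strip()] = value.strip()
    return meta


def discover_skills(root: Path) -> dict:
    """Scan skill folders; keep only (description, path) per named skill."""
    skills = {}
    for skill_md in root.glob("*/SKILL.md"):
        meta = parse_frontmatter(skill_md.read_text())
        if "name" in meta and "description" in meta:
            skills[meta["name"]] = (meta["description"], skill_md)
    return skills


def system_prompt_snippet(skills: dict) -> str:
    """Only the one-line descriptions go into the system prompt up front."""
    return "\n".join(f"- {name}: {desc}" for name, (desc, _) in skills.items())


def load_skill(skills: dict, name: str) -> str:
    """The full SKILL.md body is read lazily, only when judged relevant."""
    _, path = skills[name]
    return path.read_text()
```

This is also why skills stay cheap relative to MCP's up-front tool catalogs: the per-skill cost at session start is one description line, and the full instructions are paid for only on use.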
Simplicity, prior art, and complexity fatigue
- Several argue skills are just formalized prompt stuffing or “documentation for the AI,” not a fundamentally new invention; others counter that the specific packaging + lazy loading is a meaningful UX/architecture win.
- Some feel overwhelmed by yet another layer (AGENTS.md, MCP, skills), while others praise skills as the simplest way so far to extend coding agents.
AGI and intelligence debate
- Long subthread debates whether developments like skills show we’re far from AGI (we’re hand-writing “library functions” in English), or whether LLMs already qualify as a form of AGI by technical definitions of “general intelligence.”
- Discussion covers benchmark overfitting, Goodhart’s law, human vs machine “understanding,” and whether “real intelligence” is even definable in a non-circular way.
Vendor strategies and concerns
- Anthropic is praised for “obvious in hindsight” abstractions (MCP, skills, Claude Code) and coherent framing; OpenAI is seen as quietly following with massive distribution.
- Some want standardized, cross-vendor skills (tied to AGENTS.md / Agentic AI Foundation); others note security pitfalls (especially with MCP) and the risk of misuse.
- A separate warning highlights ChatGPT’s effective input cap being lower than the advertised context window, causing silent prompt truncation.