What this is

Anthropic this week revealed the underlying architecture of the Claude Code CLI: commands come from at least six distinct sources, a sign that AI coding tools are shifting from "omnipotent conversation" to controlled, modular workflows. The source code shows that Claude Code splits slash commands into three types: Prompt commands (executed by the AI, but with a strictly limited set of available tools), Local commands (executed locally, without invoking the AI), and Local JSX commands (which open interactive interfaces).
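One way to picture the split is as a discriminated union, where a dispatcher branches on each command's kind. This is an illustrative TypeScript sketch, not Anthropic's actual source; the type names and fields are invented for clarity:

```typescript
// Illustrative model of the three slash-command types (hypothetical
// names and fields; not Claude Code's real internals).
type SlashCommand =
  | { kind: "prompt"; name: string; allowedTools: string[] } // AI executes; tools whitelisted
  | { kind: "local"; name: string; run: () => string }       // runs locally; no AI call
  | { kind: "local-jsx"; name: string; render: () => string }; // opens an interactive UI

// The dispatcher only needs to branch on `kind`, and the type system
// guarantees each branch sees only the fields that exist for it.
function describe(cmd: SlashCommand): string {
  switch (cmd.kind) {
    case "prompt":
      return `${cmd.name}: AI-executed, tools limited to [${cmd.allowedTools.join(", ")}]`;
    case "local":
      return `${cmd.name}: runs locally without invoking the model`;
    case "local-jsx":
      return `${cmd.name}: renders an interactive interface`;
  }
}
```

The design point is that per-type rules (tool whitelists, no-AI execution, UI rendering) are enforced structurally rather than by convention.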

More worthy of attention are its command sources. Beyond built-in commands, users can define new skills by writing Markdown files in designated directories, and MCP servers (MCP is the standard protocol for connecting AI to external tools) can dynamically inject new commands. At the execution level, it introduces a Fork mode: a complex task launches a sub-agent (an AI entity that autonomously plans and executes the task) with its own token budget (tokens are the billing unit for AI text processing), so it no longer squeezes the main conversation's context window. This is no longer a "Q&A" chat box, but a scheduling system with pluggable modules, clear permissions, and the ability to dispatch subtasks.
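In practice, such a user-defined skill is just a Markdown file whose filename becomes the command name. A minimal sketch, assuming the documented `.claude/commands/` layout, frontmatter fields, and `$ARGUMENTS` placeholder (saved as `.claude/commands/changes.md`, this would define a hypothetical `/changes` command; the exact schema may vary by version):

```markdown
---
description: Summarize recent changes on the current branch
allowed-tools: Bash(git log:*), Bash(git diff:*)
---
Summarize the changes in the last $ARGUMENTS commits,
grouping them by affected subsystem.
```

The frontmatter declares what the command is allowed to touch; the body is the prompt the AI receives, with `$ARGUMENTS` substituted from whatever the user types after the command.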

Industry view

We judge that the core value of this architecture lies in security and controllability. When the /commit command is strictly restricted to git operations via allowedTools, the risk of the AI recklessly executing scripts and deleting a repository is blocked at the source. Fork mode, meanwhile, is an engineering answer to the current context-length bottleneck of AI models: complex tasks are chopped into pieces and handed to independent sub-agents.
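To make that restriction concrete: a commit command written in the custom-command style above would whitelist only git subcommands, so any attempt to run other shell commands falls outside the permitted tools. This is an illustrative file, not Anthropic's actual built-in /commit:

```markdown
---
description: Stage and commit current changes with a generated message
allowed-tools: Bash(git status:*), Bash(git add:*), Bash(git commit:*)
---
Inspect the working tree, stage the relevant files, and create a
commit with a concise message describing the change.
```

Because the whitelist is declared per command rather than per session, a compromised or confused prompt cannot escalate beyond the three git operations listed.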

However, there are dissenting voices. Some developers point out that MCP's dynamic registration and the custom capabilities of Skills open a new attack surface at the system's foundation: if a malicious MCP server injects an over-privileged command, the original security perimeter could be rendered useless. Furthermore, over-reliance on Fork mode may complicate simple problems, tempting developers into a "building blocks for the sake of building blocks" script hell and sacrificing the large model's advantage of flexible, on-the-fly error correction.

Impact on regular people

For enterprise IT: Fine-grained permission controls like allowedTools provide the security guarantees needed to deploy AI agents on corporate intranets, giving IT departments the confidence to finally let them operate.

For individual careers: The competitiveness of programmers is shifting from "writing specific code" to "defining workflows and skill modules for AI." Prompt engineering is upgrading into workflow engineering.

For the consumer market: This interaction model of "slash commands + modular execution" is highly likely to spread from coding tools to office software. Ordinary people using AI will act more like they are making precise menu selections, rather than chatting aimlessly.