What This Is

Cangjie Skill is an open-source workflow project built on a single core insight: instead of dumping an entire book directly into a large language model (LLM — the engine underneath tools like ChatGPT), you first have a human break the book's decision frameworks and core principles into standardized, structured files, then mount those files for the AI to use on demand. The project already ships with pre-processed extracts from books including Warren Buffett's Letters to Shareholders and No Rules Rules (the Netflix culture book).

The problem it targets is real. Feed a 300,000-character book straight into an AI and the model tends to anchor on the opening and closing sections while effectively ignoring the middle — a well-documented failure mode the industry calls the "lost-in-the-middle" problem. Cangjie Skill's answer is to slice the book into discrete "skill modules" (SKILL.md files), each carrying a trigger scenario and an execution logic, then connect them to AI tools via MCP (Model Context Protocol — a standard interface that lets AI tools read external files). The AI can then pull the relevant module as needed during a conversation.
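To make the "skill module" idea concrete, here is a minimal sketch of what one such SKILL.md file might look like. The section names and the example content below are illustrative assumptions for this article, not the project's actual schema:

```markdown
# Skill: Margin-of-Safety Check

## Trigger Scenario
The user is evaluating whether a purchase price leaves enough room for error.

## Execution Logic
1. Estimate intrinsic value using the most conservative inputs available.
2. Compare the asking price against that estimate.
3. Recommend proceeding only if the discount exceeds a preset threshold.

## Source
Warren Buffett's Letters to Shareholders (extracted principle)
```

The point of the format is that each file is small and self-describing: the AI tool, connected over MCP, can scan the trigger scenarios and load only the one module a given conversation actually needs, instead of holding the whole book in context.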

Methodologically, the project adapts the "RIA Reading Breakdown" method developed by Taiwanese educator Zhao Zhou — a three-step read/interpret/apply framework — and extends it with a "triple verification" layer (cross-domain corroboration, predictive power, non-obviousness) plus AI execution steps. The result shifts the output from human-readable notes into machine-executable instructions.
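The triple-verification layer is essentially a filter over candidate principles extracted from a book. A minimal sketch of that filtering logic, assuming the three checks are recorded as simple yes/no judgments by the human operator (the class and function names here are illustrative, not the project's code):

```python
from dataclasses import dataclass


@dataclass
class Principle:
    """A candidate principle extracted from a book, with the operator's
    three verification judgments attached."""
    text: str
    cross_domain: bool   # corroborated outside the book's own domain?
    predictive: bool     # does it forecast outcomes, not just describe them?
    non_obvious: bool    # would a reader not arrive at it unaided?


def triple_verify(principles):
    """Keep only principles that pass all three verification checks."""
    return [p for p in principles
            if p.cross_domain and p.predictive and p.non_obvious]


candidates = [
    Principle("Avoid businesses you cannot value", True, True, True),
    Principle("Buy low, sell high", True, False, False),  # fails: trivial
]
survivors = triple_verify(candidates)
print([p.text for p in survivors])
```

Trivial or merely descriptive statements are dropped; only principles that survive all three checks are promoted into a skill module, which is what separates this process from ordinary note-taking.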

Industry View

The project's direction aligns with a trend that is actively taking shape across the industry: rather than trying to make AI remember more, give AI a better external tool library to draw from. Heavy users of knowledge-management tools like Notion and Obsidian have visibly increased their discussion of similar workflows over the past year, and "structuring a personal knowledge base before connecting it to AI" has migrated from geek circles into the daily practice of a meaningful slice of knowledge workers.

But the counterarguments are equally direct. First, the labor cost of this workflow is extremely high. The project itself cannot automate the "turn a book into a skill pack" step — that core stage still depends entirely on human judgment and decomposition, and processing a single book end-to-end can easily consume dozens of hours. Second, the quality of the output depends heavily on how deeply the operator actually understood the source material; the resulting "skill pack" may amount to nothing more than reading notes reformatted, with no genuine filtering of methodologies that hold up under scrutiny. Third, as leading models now routinely support context windows of 100,000 characters or more, the urgency of the "lost-in-the-middle" problem is itself being eroded by raw engineering progress — making the core pain point Cangjie Skill addresses increasingly contestable.

Our judgment: this project reads more like a thinking framework worth borrowing than a production-ready productivity tool you can deploy out of the box.

Impact on Regular People

For enterprise IT: If an organization is sitting on large volumes of unstructured policy documents or training materials, the underlying logic of "structure first, then connect to AI" is worth studying. That said, teams should rigorously assess whether the human labor investment pencils out. For now, this approach is better suited to a contained pilot than a broad rollout.

For individual professionals: Knowledge workers who have spent years building deep expertise in a specific domain will likely get more practical value from organizing their own accumulated methodologies into structured documents and connecting those to AI — rather than reaching for someone else's pre-packaged "Munger skill pack." The latter looks appealing, but a framework extracted by someone else may simply not map onto your actual work context.

For the consumer market: The project remains squarely in technical-community territory and requires a meaningful level of hands-on capability. If a product studio wraps this into an "AI reading assistant" consumer app, there will be short-term market appetite — but the moat is thin. The underlying functionality is straightforward to replicate.