What Happened

Three significant AI industry events occurred in the week of March 31–April 5.

First, researcher Chaofan Shou discovered that version 2.1.88 of Anthropic's Claude Code npm package shipped with an undeleted 59.8 MB source map file, exposing 510,000 lines of code across 4,756 files. The leak revealed unannounced features, including KAIROS (a persistent background daemon for memory and task planning), BUDDY (a virtual pet system with 18 species and a sarcasm meter), and a stealth mode designed to hide AI contributions to open-source projects. Anthropic responded with DMCA takedowns targeting 8,000 accounts, a sweep that incorrectly flagged innocent developers.

Second, OpenAI announced a $122 billion funding round on April 1, raising its valuation to $852 billion. Amazon led with $50 billion (of which $35 billion is conditional on an IPO or an AGI milestone by the end of 2026), while Nvidia and SoftBank contributed $30 billion each. OpenAI reported 900 million weekly active users, 50 million paid subscribers, and $13.1 billion in 2025 revenue.

Third, China's Ministry of Industry and Information Technology and nine other departments jointly issued the AI Science and Technology Ethics Review and Service Measures (Trial) on April 2, formalizing ethics review requirements across the full AI development lifecycle.

Why It Matters

For indie developers and SMEs, the Anthropic leak is a concrete reminder that source maps must be excluded from production npm packages, a basic but frequently skipped build step. The OpenAI funding round signals continued concentration of AI infrastructure investment among a handful of hyperscalers: API pricing and model access policies will increasingly be shaped by the interests of Amazon, Nvidia, and SoftBank rather than by developer community feedback. The Chinese ethics framework introduces mandatory review checkpoints covering training data selection, algorithmic bias prevention, and disclosure of model purpose, requirements that will directly affect any team building AI products for the Chinese market.

Asia-Pacific Angle

Chinese and Southeast Asian developers targeting global markets face a dual compliance reality. China's new ethics rules require documented review of training data standards, bias controls, and algorithmic transparency before deployment, so teams building for domestic Chinese users should audit their data pipelines and model cards now. For those expanding into Southeast Asia, the Chinese framework is likely to influence similar regulations in Vietnam, Thailand, and Indonesia, where AI governance drafts already reference Chinese and EU models. The conditional AGI clause attached to Amazon's $35 billion tranche in the OpenAI round is also relevant: it signals that major cloud providers are structuring AI investments around milestone-based governance, a pattern that regional providers such as Alibaba Cloud and Tencent Cloud may replicate in their own AI partner agreements.

Action Item This Week

Audit your npm or PyPI package build pipeline today: add an explicit .npmignore file or a files allowlist in package.json to keep *.map files, TypeScript sources, and debug artifacts out of published releases. Then run npm pack --dry-run and inspect the reported file list before your next release. PyPI users can do the equivalent by reviewing MANIFEST.in and inspecting the archive produced by python -m build before uploading.
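A minimal sketch of the files allowlist approach in package.json (the package name and dist/ layout here are illustrative, not taken from any real package):

```json
{
  "name": "example-package",
  "version": "1.0.0",
  "main": "dist/index.js",
  "files": [
    "dist/**/*.js",
    "dist/**/*.d.ts",
    "!dist/**/*.map"
  ]
}
```

The negated "!dist/**/*.map" pattern excludes source maps even though the surrounding dist/ globs would otherwise match them. Note that package.json, README, and LICENSE are always included in the tarball regardless of the files field, and npm pack --dry-run prints the exact contents that would be published without actually publishing anything.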