Andrew Kelley, creator of the Zig language, asserted this week that programmers who use AI-assisted coding carry a distinct "digital smell" that human reviewers can spot in seconds.

What this is

Andrew Kelley argues that people are wrong to believe it is impossible to tell who is using LLMs (Large Language Models) to write code. AI hallucination errors are fundamentally different from human errors, so developers who rely on agents (AI programs that execute tasks autonomously) carry a "digital smell" they don't notice themselves but that others spot at a glance, much as non-smokers can immediately smell smoke on a smoker. His stance is clear: use AI if you like, but don't submit AI-generated PRs (pull requests) to my open source projects.
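To make the distinction concrete, here is a minimal, hypothetical illustration in Python of the kind of hallucination error being described: a call that looks idiomatic but targets an API that has never existed, a mistake a human author rarely makes in this exact form. The hallucinated line is our own invented example, not something from Kelley's remarks.

```python
import json

# A hallucinated call an LLM might produce: json.parse does not exist
# in Python's standard library; the model has conflated it with
# JavaScript's JSON.parse. Uncommenting it raises AttributeError.
# data = json.parse('{"ok": true}')

# The API that actually exists:
data = json.loads('{"ok": true}')
print(data["ok"])  # True
```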

Industry view

We note that resistance to AI-generated code is rising within the open source community. While some developers believe AI boosts their productivity, project maintainers are seeing a flood of low-quality submissions that sharply increases review costs. What concerns us is that AI-generated code merged without scrutiny buries defects that are difficult to trace later. Some argue that as models improve, AI code will become harder to distinguish and Kelley's "olfactory advantage" won't last; the current reality, however, is that open source maintainers are building defenses, subjecting AI code to stricter review or rejecting it outright.

Impact on regular people

For enterprise IT: adopting AI coding tools without controls can pile up technical debt, and code review must shift from "does it work?" to "hunting for AI hallucinations" (see the sketch below). For individual careers: writing code with AI is not the same as improving your skills, and "scrubbing AI traces" out of submissions is becoming a new drain on some programmers' time. For the consumer market: the trust signals around open source software may change; in the future, "100% human-written" could become a safety selling point for certain software.
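As a rough sketch of what "hunting for AI hallucinations" could mean in practice, the snippet below checks whether a name referenced in reviewed Python code actually exists in the module it claims to come from. The helper attr_exists and the module/attribute pairs are our own illustration under the assumption that the code under review is Python; this is not part of any established review tool.

```python
import importlib

def attr_exists(module_name: str, attr: str) -> bool:
    """Return True if module_name.attr really exists.

    A crude aid for reviewers triaging suspected hallucinated APIs:
    import the module and check the attribute, rather than trusting
    that a plausible-looking call is real.
    """
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(module, attr)

print(attr_exists("json", "loads"))  # True: a real function
print(attr_exists("json", "parse"))  # False: a common LLM conflation
```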