Your AI Screener Might Be Biased
Last week, I used ChatGPT to do an initial screen of a community manager's resume, and it gave a "strongly recommend." Later, I found out that resume was written by AI. I was stunned.
We're increasingly using AI to help review resumes, screen proposals, and pick vendors. But a Stanford team just published a paper testing mainstream models like GPT-4, Claude, and Gemini, and found a harsh truth: AI consistently favors AI-generated resumes, regardless of content quality. That means if your proposal is AI-written and the client uses AI to screen, it's more likely to get picked—not because it's better, but because it "smells right."
Who Is Already Facing This
My friend Xiaochen was pitching freelance projects on Upwork last year and specifically used AI to polish his proposals, because he'd noticed many employers also use AI for the initial screen, and AI-written proposals pass more easily. It's a weird AI-to-AI echo chamber. But on the flip side, if you insist on writing everything yourself, your genuine voice might get screened out by the AI.

I messed this up myself: I was helping Xiaochen screen applications for a remote designer role and asked Claude to rank them. The intro Claude scored highest turned out to be clearly AI-polished. I didn't recognize that as bias at the time; I just thought the applicant was a good writer.
Cost to Verify Today
If you're curious whether your AI has this bias, you can test it yourself:
- Money: $0 (just use your existing AI account)
- Time: 20 minutes
- Technical barrier: Just know how to copy and paste text into a chat box, no code needed
- First step: Take two real resumes (one you wrote yourself, one with the same content rewritten by AI), drop them into the same AI, and ask "which one do you recommend?" See what it picks.
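If you're comfortable with a little code, the same A/B test can be scripted so it's repeatable. This is just a sketch: `ask_model` is a hypothetical placeholder for whatever chat API you use (it takes a prompt and returns "1" or "2"), and the prompt wording is illustrative. Running the comparison in both orders matters, because models often favor whichever option appears first.

```python
def compare_resumes(resume_a, resume_b, ask_model):
    """Ask a model to pick between two resumes, presenting them in
    both orders to control for position bias.

    ask_model: placeholder callable for your chat API of choice;
    it should take a prompt string and return "1" or "2".
    """
    prompt = ("Here are two resumes for the same role. "
              "Reply with only '1' or '2' for the one you recommend.\n\n"
              "Resume 1:\n{}\n\nResume 2:\n{}")
    first = ask_model(prompt.format(resume_a, resume_b)).strip()   # A shown first
    second = ask_model(prompt.format(resume_b, resume_a)).strip()  # A shown second
    if first == "1" and second == "2":
        return "A"  # consistent preference for resume A
    if first == "2" and second == "1":
        return "B"  # consistent preference for resume B
    return "inconsistent"  # the pick flipped with order: position bias
```

If the verdict flips depending on order, that alone tells you the "score" is noisier than it looks.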
Not everyone needs this tool right now. If you don't use AI to screen anything at all, just keep this as background info—no need to rush into anything. It's fine if you don't test it today.
Advice by Stage
Just starting out: If you aren't using AI screening yet, this bias doesn't affect you for now. But if you start using AI to write collaboration proposals, knowing the other side might use AI to screen you is an info advantage—polishing your pitch isn't something to be ashamed of.
With 1-2 clients: Next time you screen candidates, don't just let AI rank them. I treat the AI's scores as a reference and go over the applications myself. My current practice: after the AI screen, I randomly pull three "low-score" applications and read them, so good resumes written by real humans that don't "taste like AI" don't get missed.
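That spot-check habit is simple enough to sketch in a few lines. Assumptions: you've recorded the AI's scores as (applicant, score) pairs in whatever format your tracking uses; the function names here are hypothetical.

```python
import random

def spot_check(scored, k=3, seed=None):
    """Pull k random applications from the AI's bottom half for a
    human read-through.

    scored: list of (applicant, ai_score) pairs (illustrative shape;
    adapt to however you record scores).
    """
    rng = random.Random(seed)  # seed only for reproducibility in tests
    ranked = sorted(scored, key=lambda pair: pair[1])  # lowest score first
    bottom_half = ranked[: max(k, len(ranked) // 2)]
    return rng.sample(bottom_half, min(k, len(bottom_half)))
```

Sampling randomly, rather than rereading the bottom three, keeps you from only ever auditing the same kind of rejection.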
Scaling up: If your team is already using AI for initial screening, I'd suggest adding a manual calibration step—cross-reference AI scores with human scores, at least for one round to see how big the gap is. I got stuck here too: we once trusted AI sorting completely, and only realized the problem when we noticed applicants with the same "style" kept passing.
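The calibration round can be boiled down to one number: how far apart the AI's scores and your human scores sit on average, and who the two disagree about most. A minimal sketch, assuming both sets of scores are on the same scale (say 1-10) and keyed by applicant name; the field names are illustrative.

```python
def calibration_gap(ai_scores, human_scores):
    """Compare AI and human scores for the same applicants.

    ai_scores, human_scores: dicts mapping applicant name -> score
    on a shared scale. Returns the mean absolute gap and applicant
    names sorted from biggest disagreement to smallest.
    """
    gaps = {name: abs(ai_scores[name] - human_scores[name])
            for name in ai_scores if name in human_scores}
    mean_gap = sum(gaps.values()) / len(gaps)
    worst_first = sorted(gaps, key=gaps.get, reverse=True)
    return mean_gap, worst_first
```

The applicants at the top of the disagreement list are the ones worth rereading by hand: in my experience that's where the "AI-flavored but thin" resumes cluster.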