You Don’t Get to Ignore Politics in AI Coding Tools
AI-generated code isn’t neutral. Sometimes it isn’t even secure. And if you’re relying on it, you may be inheriting someone else’s politics along with their bugs.
The DeepSeek Example
Last week in the news we were treated to research from CrowdStrike, which found that DeepSeek, China’s top AI engine, may deliver less secure code to developers and projects linked to “politically sensitive” groups. The Washington Post reports that prompts mentioning Falun Gong, Taiwan, or Tibet triggered flawed or rejected responses.
STEM Was Supposed to Be Apolitical
I don’t think any of us got into STEM fields expecting to maintain an awareness of politics; for some of us, getting into STEM may even have been an attempt to avoid politics. Nonetheless, it should have been obvious since at least the Cambridge Analytica scandal that we as software professionals no longer have that luxury.
Hidden Bias in AI Coding Tools
Since a lot of people in my network use AI coding tools — and especially since a surprisingly large number of people in my network are building them as well — I came here today to point out:
Software developers should be aware: open-source AI tools may quietly embed geopolitical bias into code you rely on. And to be clear, the kind of geopolitical bias we’re talking about here means code that is strategically more prone to security vulnerabilities.
Not Just Essays — Also Code
The fact that DeepSeek has crystal-clear political bias baked directly into the model weights should surprise absolutely nobody by now. If I were a college student writing a term paper on Falun Gong and using a DeepSeek-backed proofreading tool, I’d be skeptical of the output, knowing the model might be grounded in state propaganda, and I’d adjust my strategy accordingly.
On the other hand, the fact that this same dynamic may come into play not just with college term papers but also with code probably comes as a surprise to many software developers. Hence the interest in the CrowdStrike findings.
Code Isn’t Neutral
For software developers, there’s a strong tendency to treat code as “neutral” when in fact, as we can see here, it may not be. Combine that with trends like “vibe coding,” where you can’t count on AI-generated code having received even the most cursory review, and you can see the risks that result.
Furthermore, although DeepSeek is just the most blindingly obvious (and often silly) example, given the general direction of world affairs I think it’s likely we’ll see similar biases creep in around topics that other governments and politicians find “sensitive” for whatever reason. And if you’ve read this far, you’re now aware that this can turn into insecure code, exploitable by the organs of said governments and politicians (as well as your random everyday hackers).
Two Proposals
So what are software professionals to do about this? I have two modest proposals.
First, if you’re building an AI coding tool, be transparent about the model(s) you’re using. Second, make it easy for your users to swap in a different model, including an in-house model; many security-conscious enterprises would prefer to deploy one anyway, which makes this a good business decision as well.
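To make the second proposal concrete, here’s a minimal sketch of what a swappable-backend design might look like. Everything in it is hypothetical (the CodeModel protocol, the REGISTRY, the CODEGEN_MODEL environment variable); the point is simply that the model choice should be explicit, auditable, and a one-line configuration change.

```python
# Minimal sketch of a model-agnostic code-generation interface.
# All names here (CodeModel, HostedModel, InHouseModel, CODEGEN_MODEL)
# are illustrative, not from any particular tool or vendor SDK.
import os
from typing import Protocol


class CodeModel(Protocol):
    """Anything that can turn a prompt into generated code."""
    name: str

    def complete(self, prompt: str) -> str: ...


class HostedModel:
    """Placeholder for a third-party hosted model (DeepSeek, etc.)."""

    def __init__(self, name: str, endpoint: str):
        self.name = name
        self.endpoint = endpoint

    def complete(self, prompt: str) -> str:
        # A real implementation would call the vendor's API here.
        raise NotImplementedError(f"wire up the {self.name} client")


class InHouseModel:
    """Placeholder for a model hosted inside the enterprise boundary."""

    name = "in-house"

    def complete(self, prompt: str) -> str:
        raise NotImplementedError("wire up the internal inference endpoint")


# The registry is the key design point: the tool declares exactly which
# backends it can use, and switching is a one-line configuration change.
REGISTRY: dict[str, CodeModel] = {
    "deepseek": HostedModel("deepseek", "https://example.invalid/deepseek"),
    "in-house": InHouseModel(),
}


def get_model() -> CodeModel:
    """Pick the backend from configuration, defaulting to the in-house model."""
    choice = os.environ.get("CODEGEN_MODEL", "in-house")
    return REGISTRY[choice]


if __name__ == "__main__":
    model = get_model()
    print(f"Using model: {model.name}")
```

The registry is the part that matters: users can see exactly which models the tool can call, and moving to an in-house backend doesn’t require touching the rest of the tool.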
Fortunately, a lot of the tools I’ve seen are doing this already. But product owners need to remain vigilant, especially as we see more and more code being processed and reviewed by AI in use cases that are less transparent and interactive than a typical IDE copilot.
Closing Thought
At the end of the day, this isn’t about fear-mongering. It’s about professional responsibility. Developers don’t get to ignore politics in AI coding tools. The choice is to acknowledge it now or deal with the security fallout later.