Abdication and Resistance Are Two Bad Takes on AI Coding

The debate around AI coding tools is being driven by two bad takes, which I refer to as “abdication” and “resistance.”

On one side are engineers who have handed over thinking, design, judgment, and pretty much everything else coding-related to AI and call it “progress.” On the other are engineers who reject the tools entirely and treat any productivity gain as suspect. Both camps mistake ideology for engineering. And both miss the point of how AI actually fits into professional software development.

It’s worth saying up front that neither abdication nor resistance usually comes from individual incompetence. In practice, both are often justified responses to organizational incentives. Teams rewarded primarily for visible output (the vaunted “25% productivity increase from AI”) drift toward abdication. Teams rewarded primarily for stability and risk avoidance (“no production outages at all costs”) drift toward resistance. Once those patterns take hold, they start to look like personal convictions, when they’re really the result of how the work of engineers and line-level engineering managers is evaluated.

Let’s discuss.

Camp 1: Abdicators

The Abdicators believe that AI will make software engineers (and especially the so-called “junior devs”) obsolete. These folks have started to generate most or all of their code using AI, hype up “vibe coding”, and often double down with a heavy dose of marketecture about “prompting, not coding”. (And conveniently, many of them are selling AI tooling, courses, or content at the same time.)

But here’s where the problems begin:

  • If you step back from the magic and admit to yourself how LLMs actually work, AI-based coding tools guess at plausible code based on patterns, not correct code based on an understanding of your project’s architecture or requirements. This is a fundamental fact of how the technology works, not something that somebody’s hot AI coding startup can fix. Here’s a fun rant on this topic.

  • Empirical evidence suggests that AI often creates duplicated or bloated code, ignoring decades of best practices around the DRY principle and maintainability.

  • Security analysis shows a high occurrence of potential vulnerabilities, where “functional” isn’t the same as “secure or correct in the real world” (see the sketch after this list).
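
To make that last bullet concrete, here’s a minimal, hypothetical sketch of the pattern security reviewers keep flagging in generated code. The get_user function and users table are invented for illustration, but the shape is typical: the code “works,” and the vulnerability is invisible to a happy-path test.

```python
import sqlite3

# The "functional" version an LLM happily produces: it passes a quick
# manual test, but splicing user input into the SQL string is a classic
# injection hole (e.g. user_id = "1 OR 1=1" returns every row).
def get_user_insecure(conn: sqlite3.Connection, user_id: str):
    return conn.execute(f"SELECT * FROM users WHERE id = {user_id}").fetchall()

# Same behavior on valid input, but the driver binds the parameter
# instead of interpolating it, so "1 OR 1=1" is just a non-matching id.
def get_user(conn: sqlite3.Connection, user_id: str):
    return conn.execute("SELECT * FROM users WHERE id = ?", (user_id,)).fetchall()
```

Both versions pass a quick happy-path test, which is exactly why a cursory review doesn’t catch the difference.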

Lightweight prototyping or “MVPs” are one thing, but when experienced engineers take the output of a plausible-text generator and send the results to production with only a cursory review, they are outsourcing their engineering judgment. And when they do this in the name of “productivity,” it’s incredibly short-sighted given the amount of tech debt being accumulated.

What makes abdication dangerous is that it often looks rational under real-world pressures, especially when speed is rewarded and long-term costs are invisible (and teams are perhaps under executive orders to “leverage AI to improve productivity”).

Camp 2: Resisters

At the other extreme are the engineers who claim that AI coding tools slow them down and therefore must be fundamentally flawed and avoided at all costs.

This is a baffling position. Used even halfway competently, AI coding tools are provably faster for a wide range of tasks, including:

  • Generating boilerplate or other highly precedented code

  • Exploring unfamiliar APIs, especially when augmented with agents that read structured specifications

  • Translating between languages or frameworks or doing version upgrades

  • Drafting tests, mocks, fixtures, and documentation (a sketch follows this list)

  • Refactoring mechanical or repetitive code

  • Acting as a sounding board for new design and architecture ideas
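
As an illustration of the test-drafting bullet above, here’s the kind of scaffold these tools produce in seconds. The slugify function and its cases are hypothetical stand-ins; the point is that table-driven pytest code is so heavily precedented that the model rarely misses the structure, leaving the engineer to verify only the assertions.

```python
import re
import pytest

def slugify(text: str) -> str:
    """Hypothetical function under test: lowercase, hyphen-separated."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

# The kind of table-driven scaffold an AI assistant drafts well:
# mechanical, precedented, and easy for a human to verify at a glance.
@pytest.mark.parametrize(
    ("raw", "expected"),
    [
        ("Hello, World!", "hello-world"),
        ("  spaces  everywhere  ", "spaces-everywhere"),
        ("already-a-slug", "already-a-slug"),
        ("", ""),
    ],
)
def test_slugify(raw: str, expected: str) -> None:
    assert slugify(raw) == expected
```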

If an experienced engineer says AI coding tools are making them slower, it almost always means:

  • They are using the tool at the wrong level of abstraction

  • They are fighting it instead of collaborating with it

  • They are secretly (and unnecessarily) worried about what widespread AI adoption means for their role and identity as engineers

In other words, the problem isn’t the AI coding tool, but rather how the engineer is integrating it into their workflow.

Resisters often frame their resistance as engineering professionalism or rigor. In reality, it’s usually unfamiliarity, misplaced expectations, or an unwillingness to invest in a new way of working.

To be fair, poorly integrated AI tools do introduce real cognitive overhead. But that’s an argument for better integration and training (and maybe procuring AI coding tools with a better UX), not wholesale rejection.

Bonus Camp: Silent Majority

Of course, both the Abdicators and the Resisters tell compelling stories that generate a lot of news reports and online debate. What’s missing is what I’d guess is the largest group of all: Engineers who are pragmatically integrating AI into their workflows and not talking as much about it.

These engineers aren’t making LinkedIn posts about the “end of programming,” nor are they posting angry Reddit threads about Copilot ruining their “flow”. They’re doing mundane, professional things daily, such as:

  • Letting AI draft code they already understand

  • Using it to explore unfamiliar libraries, then taking back control

  • Using AI output as a source of useful information, but throwing away as much AI-generated code as they keep

You don’t hear from this group much because they’re not selling tools, building personal brands, or defending their professional identity as “real engineers.” They’re just, well, doing work that is simultaneously productive and reliable.

This quiet, pragmatic approach is also the one most aligned with how professional engineering has always worked: New tools get absorbed, disciplined, and normalized -- not idolized or rejected outright.

So What Actually Needs Fixing?

The fact is that most organizations won’t have an AI tooling problem. They will have an engineering judgment and process problem that AI coding tools will amplify.

Abdication produces codebases that look productive on the surface but are brittle, over-abstracted, poorly understood, and quietly unsafe. Resistance produces teams that fall behind, struggle to hire, and confuse inertia with engineering rigor. In both cases, the consequences tend to show up months later when velocity collapses, defect rates climb, or nobody is quite sure how the system actually works anymore.

At that point, AI stops being a novelty and becomes a leadership problem.

In my work over the years, I’ve spent a lot of time helping teams recover from the downstream effects of bad engineering decisions: unmaintainable systems, architectural drift, tech debt, and process breakdowns that only become visible once things start failing in production. AI-assisted coding doesn’t create a new class of failure so much as it exacerbates familiar ones, and that makes the cleanup both more urgent and more subtle.

I’m increasingly focusing my work on exactly this intersection: helping teams assess where AI is helping, where it’s quietly harming, and how to restore engineering ownership without throwing away the genuine productivity gains these amazing tools can offer.

If you’re leading an engineering organization and recognize pieces of your team in any of the camps above, that’s not a sign you or your team have failed. It’s a sign your tooling has moved faster than your operational processes.

If you’re responsible for the health of an engineering organization and are unsure whether AI is helping, hurting, or quietly doing both, I’m open to exploratory conversations. Sometimes all that’s needed is an external perspective to surface problems early before they turn into something expensive.
