You Can’t Patch Fast Enough Anymore
There was a time when patching felt like a strategy. In fact, it’s probably still the primary strategy that most engineering organizations use when it comes to software vulnerability mitigation.
It’s never been a perfect strategy, but it was a reasonable approximation of one, supported by mature tooling, processes, and audit standards. You found or monitored vulnerabilities, applied fixes, and reduced exposure. If you were disciplined, you could convince yourself you were staying ahead and reasonably safe from a large class of exploits.
That world is disappearing. Thanks to AI, patching alone should not make you sleep well at night anymore.
The Big Assumption That Broke
Traditional IT security thinking relied on a simple assumption: You can patch faster than attackers can exploit. It was never universally true, but it was often true enough (truthy, if you will). We need to stop for a minute and think about why:
Vulnerabilities took time and real effort to discover. In many cases, the “good guys” found them and quietly gave months-long warnings to vendors and maintainers prior to public disclosure.
Exploits took time and (often) significant technical skill to develop.
Attackers had resource constraints and therefore had to prioritize.
All this friction in the system meant lead time, and that lead time made patching a justifiable security posture. AI is removing that friction.
We’re moving toward an environment where vulnerabilities are discovered faster, exploits are generated faster, and targeting is increasingly automated. The gap between “some bug exists” and “this bug is being weaponized against you” is shrinking. Once that happens, the idea of “staying up-to-date on patches” starts to break down.
Even If You Do Everything Right
Often in my blog I address something like the “median engineering organization” and assert that while maybe a large best-of-class company can do such-and-such, your 10-person IT department probably can’t. That’s not true when it comes to patching post-2026-level-AI. Nobody in the industry can patch their way out of the AI exploit train coming down the tracks.
Let’s just assume you are already very good at managing vulnerabilities via patching. You have automated patch pipelines, strong dependency hygiene, aggressive SLAs, and you’re even using best-in-class tools like Chainguard to reduce exposure at the source. You’re doing everything “right.” It’s not enough anymore.
Attackers armed with advanced AI tools based on models like Mythos can not only find thousands of bugs but also build fully automated exploit chains, potentially producing zero-day exploits that go from nothing to active use in the wild in a matter of hours.
Think about that for a second – hours. Sophisticated software modules can’t even be built – let alone tested, documented, and deployed – in hours. Even if you have “white hat” AI tools fixing vulnerabilities as fast as they are found (I’ve seen some folks suggest this as a “solution”), the laws of physics still imply a dangerously long exploit window. And that doesn’t even consider nation-states who may pull ahead or fall behind in the frontier-AI race and control access to their latest developments.
Patching Is Now Just Table-Stakes Hygiene
None of this is to say that you should throw up your hands and stop your patching efforts. That would be silly. But it does mean you need to demote it in your mental model.
Patching is no longer a “strategy”. It’s basic table-stakes hygiene.
It’s just like locking your doors at night. It’s necessary but not sufficient against a determined attacker who comes to the door already knowing where the weak spots in your defenses are.
If your security posture depends on patching being your primary defense, you’re relying on that time advantage that is rapidly disappearing.
The Capability Gap
If you can’t rely on patching to get to your systems before the hackers do, you have to assume that, at some point, something somewhere in your IT environment will be exploited. That shifts the problem from prevention to containment. And this is where we start to see a real capability gap in the industry.
Most IT organizations are still optimized around patching. They track SLAs, measure time-to-remediation, and push teams to close CVEs faster. Audit frameworks reinforce the same behavior. None of that is wrong, but it reflects a world where speed meaningfully reduced risk.
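To make that concrete, here is a minimal sketch of the kind of metric those SLA-driven teams track. The CVE identifiers and dates are entirely illustrative; the point is that both the classic KPI (time-to-remediation) and the newer framing (exposure carried) fall out of the same data.

```python
from datetime import datetime
from statistics import median

# Hypothetical remediation records: (finding, disclosed, patched)
records = [
    ("CVE-A", datetime(2025, 1, 2), datetime(2025, 1, 9)),
    ("CVE-B", datetime(2025, 1, 5), datetime(2025, 2, 1)),
    ("CVE-C", datetime(2025, 1, 10), datetime(2025, 1, 12)),
]

def time_to_remediate(records):
    """Days from disclosure to patch for each finding."""
    return {name: (patched - disclosed).days
            for name, disclosed, patched in records}

ttr = time_to_remediate(records)

# The classic KPI: how fast do we close findings?
print("median days to remediate:", median(ttr.values()))  # 7

# The same data viewed as exposure: vulnerable days carried
print("total exposure (days):", sum(ttr.values()))  # 36
```

Note how the two numbers tell different stories: a respectable median can coexist with a large total exposure window, which is exactly the quantity that matters when exploits arrive in hours rather than months.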
The alternative is clear in theory: Move to zero-trust architecture, assume breach, and invest in containment. In practice, that’s not a simple shift.
“Zero trust” isn’t a feature that you turn on or off in your vendor software. It’s an architectural change. It requires reworking identity, access, and system boundaries across environments (many of which may be very old legacy systems) that were never designed for it. That’s a multi-year engineering effort, not a tooling upgrade or a config change.
And most teams aren’t staffed or trained for it. The average IT organization (indeed, the average IT career) is optimized for stability, not adversarial thinking. The skills required here -- threat modeling, identity design, detection engineering -- are specialized and scarce and therefore in-demand and expensive.
So while the industry talks about “zero trust” and “assume breach” as a baseline, most organizations aren’t structurally prepared to operate that way. There is a big gap between what the new threat environment demands and what teams can realistically deliver.
Detection and Response Are Even Harder
Once you accept that prevention isn’t enough, the focus shifts to detection and response. That sounds straightforward on a PPT slide. It isn’t.
Most organizations don’t actually know what “normal” looks like across their systems, which makes detecting meaningful anomalies difficult. Even when they do detect something, response is another problem: Who owns it, what gets shut down at what cost, and how quickly can action be taken without breaking the business?
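Even the simplest detection approach presupposes that baseline. As a sketch, here is about the most basic anomaly check imaginable, a z-score against historical event counts; the numbers are invented, and production detection engineering is far more involved, but it illustrates that you cannot flag “abnormal” without first knowing “normal.”

```python
from statistics import mean, stdev

# Hypothetical baseline: auth failures per hour over a quiet week
baseline = [3, 5, 2, 4, 6, 3, 4, 5, 2, 3, 4, 5]

def is_anomalous(observed, history, threshold=3.0):
    """Flag values far outside the historical norm (simple z-score)."""
    mu, sigma = mean(history), stdev(history)
    return abs(observed - mu) > threshold * sigma

print(is_anomalous(4, baseline))   # a typical hour -> False
print(is_anomalous(90, baseline))  # a sudden burst -> True
```

An organization that has never collected the `baseline` list, for any signal, cannot run even this crude check, which is the capability gap in miniature.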
“Assume breach” only works if you can answer those questions with confidence. Most teams can’t. In many IT organizations, incentives around “zero downtime” and “no disruptions” actively discourage the kinds of tradeoffs required for an active-defense posture. Those incentives are not “wrong” in many environments, but they create friction in the modern threat model.
So while active defense is necessary, it’s probably not something you can jump to directly. For most organizations, it’s a capability and perhaps a culture change that has to be built slowly on top of foundations that often aren’t there yet.
So What Do You Actually Do?
As engineering leaders, we need to bridge the gap between what is necessary and what is feasible.
On paper:
patching is insufficient
zero trust is required
assume breach is the baseline
In practice:
patching is already stretched to the breaking point
zero trust is partially implemented at best, unheard of in many organizations at worst
and active defense is aspirational
Some of this will be mitigated by tool improvements, probably built on the same kinds of AI that attackers are weaponizing. But don’t expect those tools to be cheap, simple, or sufficient. So the real strategic question becomes less about “doing security better” and more about:
“What are we structurally capable of defending?”
And the honest answer for most organizations is: not as much as they think. That leads to three unavoidable directions:
Reduce what you have to defend
Fewer systems, fewer bespoke applications, fewer places where complexity accumulates. You need to threat-model in advance. Maybe the threats you identify in some systems mean that those systems should never be built and deployed.
Consolidate onto platforms that already operate at this level
Push responsibility toward providers who can actually invest in zero trust and continuous defense at scale. In my last post, I suggested this might be a good opportunity to rethink SaaS and buy-before-build.
Be explicit about the capability gap and support upskilling
Much has been made of layoffs in the software engineering space. I’m quite certain the industry still needs security engineers trained in modern techniques like zero trust. We should be collectively upping our game in that area, using AI where it helps but not simply assuming that “AI will solve this.”
Where This Leaves Us
In an AI-accelerated threat environment, patching isn’t a broken approach, but it no longer buys you time.
For years, security strategies implicitly depended on the time buffer and the assumption that vulnerabilities would exist for a while before they were reliably exploited. That assumption allowed patching to function as a primary control.
That buffer is collapsing. And when it disappears, the KPI is no longer how fast you can remediate vulnerabilities. It’s how much exposure you carry while they exist, which, in practice, means all the time.
That reframes security decisions: not toward trying harder to create “perfect” systems, because that’s as impossible as ever, but rather toward:
reducing how much you have to defend
limiting how far an attacker can move around your infrastructure
and building systems that continue to operate under partial compromise
Most organizations are not structured for that. Their tools, metrics, and teams are still optimized for a world where faster patching meaningfully reduced risk. Changing that is expensive and slow, and it forces uncomfortable tradeoffs about what not to build, what not to run, and what not to own. But that’s the new game, because if patching no longer buys you time, then time is no longer the thing you can rely on.
Security isn’t a patching race anymore. It’s an ongoing systems survival quest.
If you’re thinking about this right now
If you’re a CTO, CIO, or engineering leader trying to figure out:
whether your current security posture is overly dependent on patching
how exposed your systems are in a faster exploit environment
what you can realistically defend vs what you probably shouldn’t be running
or how to prioritize between patching, architecture changes, and detection
this is exactly the set of tradeoffs I can help you think through.
I am available to work with teams on both the advisory side and in execution — helping map real exposure, identify where effort actually reduces risk, and make practical decisions about what to change, what to rebuild, and what not to build in the first place.
If a few focused conversations would help clarify where your biggest risks and leverage points are, feel free to reach out.