SaaS Isn’t Dead. It’s Just Getting Started.

There are two big narratives bouncing around the tech industry (plus governments and financial markets) right now:

The first is that SaaS is dead. AI means your business and product folks can now generate software on demand. So why take on an ongoing operational expense to rent someone else’s product when you can just build your own with a few LLM prompts?

The second is that AI is about to make cybersecurity dramatically worse. Frontier models from labs like Anthropic are good enough to find vulnerabilities, generate exploit pipelines end-to-end, and compress the time between “bug exists” and “bug is weaponized” down to something close to zero, or at least something no normal patch cycle can keep up with.

Both of these narratives have some truth to them. But put them together, and people are drawing exactly the wrong conclusion. If anything, they imply the opposite of the conventional wisdom:

SaaS isn’t dying. It’s about to become mandatory.

The Part Everyone Is Getting Backwards

The “build it yourself with AI” story focuses almost entirely on how cheap and easy it’s becoming to produce code.

And that part is real. I’m here to vouch that code generation is no longer a bottleneck. You can go from idea to working prototype in an afternoon. Internal tools that used to take months now take days. You can plow lines-of-code into your repository at an exponentially growing rate. (Note: You should have started to get worried by at least that last sentence.)

Believe it or not, cranking out code was never the hard part. The hard part is everything that happens after the code works (for some definition of “works”).

Running software securely, reliably, and under constant attack is a different ballgame. And it’s one that most organizations were already bad at before AI started ballooning their codebases while simultaneously making attacks faster and cheaper.

Now take that second narrative seriously for a minute. If the exploit window is collapsing, if attackers can automate discovery, targeting, and exploitation, then the environment you’re deploying your more-and-more-lines-of-code into just became far more hostile.

And yet the popular response is “great, now we can build even more of our own software.” That’s a dangerous conclusion, based on a misunderstanding of what actually breaks most organizations.

The 10x Engineer Problem, Revisited

At some point it became unfashionable (and maybe a little politically incorrect) to talk about “10x engineers.” But the core observation -- that the distribution of talent is not flat -- is true and relevant here. Let’s apply it to cybersecurity.

The gap between an average engineer and a genuinely elite security engineer is not 10%. It’s not even 2x. It is (here we go!) an order-of-magnitude difference in intuition, experience, and the ability to anticipate how systems fail under pressure.

And here’s where things really start to get uncomfortable: A regional bank CEO in the Midwest, a hospital administrator, or an insurance executive will look at their HR situation and think:

“We pay 15% above the local market for systems analysts. We hire top people. We’re fine.”

Sadly, not fine.

Whether the CEO realizes it or not, they are competing with organizations that can hire from a global talent pool, pay multiples of those local salary bands (a single engineer can cost more than the entire IT budget), and offer problems that attract the very best people in the field. And I haven’t even brought up the possibility of nation-state actors yet.

The people defending most organizations’ systems are not playing in the same league as the people building offensive capabilities at scale. They are doing the cybersecurity equivalent of asking your local high school 3A team to guard the rim against the Lakers.

And AI is about to widen that gap, not close it.

You Are Not Defending Against Your Peers

This is the part that’s easiest to ignore.

Most organizations implicitly assume that their threat model mirrors their own resources and sophistication. That hasn’t been true for a while.

Today everyone is defending against:

  • financially motivated groups running highly professional operations

  • well-resourced criminal networks

  • and in some cases, APTs with nation-state backing

These people do not have your constraints. They do not have your hiring pipeline. They do not have your budget approvals or Workday tickets or compliance committees with audit standards.

And increasingly, thanks to AI, they will have better tooling.

So when you decide to build and run your own software stack, what you’re really deciding is:

“We’re going to take direct responsibility for defending this system against adversaries who are better resourced than we are.”

Most organizations do not say that part out loud. Unfortunately, many of them don’t even realize it.

SaaS as a Security Primitive

AI as an exploit tool means we need to reframe the SaaS conversation.

SaaS is not just about convenience or faster deployment or cost tradeoffs anymore. It’s about concentrating defensive capability in the hands of people better equipped to handle it.

A serious SaaS provider can justify dedicated security teams, continuous monitoring, and infrastructure designed for adversarial conditions -- not because they’re altruistic, but because their business depends on it.

More importantly, they can amortize that investment across hundreds or thousands of customers. You and your in-house, vibe-coded software suite cannot.

Yes, SaaS concentrates risk. When a major provider fails, it fails loudly and at scale. That’s a real tradeoff. But the alternative isn’t “no risk.” It’s quietly accumulating unmanaged risk in systems you don’t have the capability to defend.

When an average enterprise builds and deploys an internal system, they’re not just “generating code.” They’re taking on the obligation to defend it indefinitely against increasingly capable attackers.

And we’ve all seen how that usually ends: an internal tool that becomes critical, sits behind a VPN for remote access, is lightly monitored and rarely patched, and remains invisible right up until someone finds it.

The difference now is speed. What used to take months to discover and exploit will increasingly take days or hours with the help of AI.

The Coming Mismatch

AI is going to make two things happen at once: First, as I mentioned earlier, the volume of software will explode. Second, the baseline level of attack sophistication will rise.

Those two trends aren’t independent, and their combination isn’t neutral. Together they create a widening mismatch between what organizations are responsible for and what they are actually capable of defending.

The top tier of companies will adapt. They’ll hire those elite-compensation-level security engineers. They’ll integrate AI into their security workflows, improve their defenses, and pull further ahead of the median organization.

Everyone else will accumulate risk faster than they realize until one day soon it blows up in their faces.

So What Actually Changes?

Certainly not everything should be SaaS. Every business has differentiating problems that can’t and shouldn’t be outsourced. But the default posture for most organizations should shift in one direction:

Own and operate less, not more.

  • Fewer internally built systems that matter

  • Fewer places where sensitive data lives

  • Fewer surfaces you are personally responsible for defending

Just because you can replace an “expensive” SaaS subscription with an in-house vibe-coded app doesn’t mean you should.

At the same time, apply much higher scrutiny to the vendors you rely on, because you are effectively outsourcing one of the hardest problems in technology today.

“Vibe coding” and AI-driven development will absolutely have a place at the margins, in low-risk domains, and in areas where the cost of failure is contained.

But treating that as a replacement for professionally operated, security-hardened systems is a category error.

Reality Check

The meme that SaaS is dead assumes that building software was the hard part. It never was.

The hard part is operating software in an environment where attacks are automated, discovery is continuous, and the patch clock is measured in hours. AI accelerates that problem more than it “solves” it.

In that world, running large bespoke internal systems isn’t conservative. It’s optimistic. And optimism is not a security strategy.

For most organizations, SaaS isn’t just still viable. It’s increasingly the only strategy that actually scales.

If you’re thinking about this right now

If you’re a CTO, CIO, or business or engineering leader trying to figure out:

  • what you should actually keep in-house vs outsource

  • how much risk your current internal systems represent

  • or where AI-driven development is helping vs quietly making things worse

this is the kind of problem I spend most of my time thinking about.

I am available to do advisory and execution work with teams navigating the AI transition. If a few focused conversations to map out where the real risks and leverage points are would help, feel free to reach out.
