Vibe Coding Is Not Software Engineering — And That Should Worry You

Back in the late 1990s, when I was a student (and in the 2000s an adjunct instructor) in the Master of Software Engineering program at Seattle University, our classrooms reflected the industry’s cultural divide. On one side were dotcom-era developers — stereotypically moving fast, breaking things, and often leaving quality at the door. On the other were engineers from Boeing and its various subcontractors — focused, deliberate, and deeply committed to building safe, maintainable systems designed to last. Microsoft folks tended to fall somewhere in between. The debates we had then felt personal, even existential, to our identity as practitioners in the software industry.

Today, the central issue in that ongoing debate has a new name: vibe coding.

So What Is Vibe Coding?

Unless you’ve been living under a rock the past few months, you know that vibe coding is the cool new way of building software with AI. Instead of writing code by hand, you describe what you want in plain language and let an LLM-based coding assistant do the rest. Your role shifts from coding to prompting, testing, and tweaking the AI’s output with follow-up prompts until you have a system that appears to “work”.

It’s easy to see why this appeals to business folks: Non-technical users no longer need to go through IT to build the tool they’ve always wanted. Executives love the idea of reducing expensive software engineering headcount while continuing to crank out “working” software. But what’s more surprising—and honestly, a little baffling—is how much this approach appeals to software developers who should know better.

Folks who’ve spent years learning what good engineering looks like are now cheering on a trend that actively undermines those very principles.

Coding by Vibe: Fast, Loose, and Dangerous

Software engineering has always struggled for legitimacy among traditional engineering disciplines. We move fast, often break things, and too often prioritize novelty over reliability. Now, with AI tools doing most of the typing, the temptation to skip the hard parts—requirements, design, testing, documentation—grows even stronger.

Vibe coding may work fine for simple tools, proofs of concept, or weekend hacks. But when adopted uncritically in production environments, it invites serious risk:

  • Security vulnerabilities stemming from unvetted or misunderstood code, generated by AI models trained on sketchy, often amateur samples from StackOverflow and GitHub.

  • Unmaintainable systems full of spaghetti logic that no one understands, tied together in a monolith without defined modules and boundaries.

  • Critical bugs hiding in the seams between different AI-generated blocks of code that may only show up in boundary cases missed by cursory testing.

  • Compliance and governance failures where safety, privacy, or legal standards are not met.

I hope you wouldn’t accept “vibe engineering” for a bridge or a medical device. Why is software, which now runs our cars, planes, hospitals, and financial systems, any different?

Maybe It’s Time We Talk About Licensing — Again

This isn’t a new debate. Back in the graduate program, we discussed the idea of licensing software engineers—just like civil, electrical, or mechanical engineers. The idea was controversial then, and still is now. But perhaps it’s time to take it seriously again.

We can’t allow vibe-based development to become the norm without serious guardrails. That means proper standards, proper oversight, and yes — more conversations about professional responsibility in software. If AI is generating production code, maybe someone should have to formally sign off on it — just like a licensed engineer does for a bridge or a building. And maybe that sign-off should come with real accountability, including professional liability. Otherwise, we’re outsourcing critical decisions to systems that notoriously can’t be held responsible — and letting developers dodge the responsibility too.

AI Is a Tool, Not a Substitute for Discipline

Over the past few years, a big part of my work has been helping teams understand and embrace engineering discipline — not as a buzzword, but as a foundation for building reliable, maintainable, and secure systems. I’ve spent countless hours talking with engineers and managers about why discipline matters: writing clear requirements, designing carefully, testing thoroughly, and documenting as you go. These aren’t outdated practices—they’re what separate real engineering from reckless hacking.

AI and LLM-based coding assistants can absolutely help accelerate the work, but they don’t replace the thinking. They have no proper context for your system’s architecture, your regulatory environment, your scalability and security needs, or the lessons your organization has learned over time. If you rely on AI without the discipline to guide and validate its output, you’re not moving faster — you’re just generating bad code faster.

Discipline doesn’t slow you down. It’s what keeps the whole thing from crashing later.

Let’s Build Software Intentionally

This is a personal topic for me because I’ve worked across the full spectrum of software development — from fast-moving startups racing to ship to stay alive, to mature software engineering organizations focused on scale and resilience, to IT teams inside leading manufacturers where safety, security, compliance, and long-term maintainability are mission-critical. I’ve seen the consequences of cutting corners — and I’ve seen how applying real engineering discipline can transform not just the software, but the entire development culture.

If you’re a tech leader or founder trying to make sense of AI-driven development, this is your moment to invest in sustainable, high-quality engineering. Let’s move beyond flashy demos and build systems that are robust, secure, and built to last.

Want to do it right? Let’s talk.
I work with startups and engineering teams to raise the bar on code quality, improve development practices, and integrate AI tools responsibly — without sacrificing engineering rigor. If you’re ready to build something worth trusting, reach out.
