Retrieval-Augmented Generation Only Works if Retrieval Works First
Enterprise AI pilots often shine in demos but fall flat in production. Why? Teams expect the LLM to clean up messy retrieval, and it can’t. In this post I call out the common anti-patterns, outline how to engineer retrieval properly, and share a playbook for RAG that actually works.
From “Instant Search” to “Patient AI” in Enterprise UX
By now everyone’s heard the claim that 95% of enterprise GenAI projects “fail.” But most of those failures aren’t about the quality of the models. They’re about mismatched user expectations. We’ve spent 20 years training people to expect instant Google-style search results, and now we’re dropping LLMs into that paradigm. It’s no wonder users are frustrated.
What Happens to Software Development If GenAI Tops Out?
Many developers assume the next big AI release will close the gap between flashy demos and production-ready systems. But what if that last 20% never arrives? In this post, I look at what stalled progress in GenAI means for building business systems, and how we can design for robustness instead of wishful thinking.
Plan to Throw One Away — Especially If It's Vibe-Coded
LLM coding assistants have made prototyping easier than ever, but they haven’t changed the fundamentals of software engineering. In this post, I explain why vibe-coded systems should almost never go directly to production, and what to do if your team is headed in that direction.
Don’t Blame the Tools for Acting Like Tools
When AI agents are given broad control over critical systems, mistakes aren’t just possible — they’re inevitable. I explain why LLMs, as advanced as they seem, are fundamentally statistical guessers, not trustworthy decision-makers. This is a must-read for anyone considering AI-driven automation in production.
Why Code Reviews Should Include Prompts in the Age of AI
Prompting an LLM is the new coding, but we're still only reviewing the output. What if your team reviewed the prompts alongside the code? Let’s explore the idea of “code+prompt” review, and what it could mean for quality, security, and engineering culture.
Why AI Model Collapse Is Bullish for Real Developers
As the web fills with AI-generated code and confident nonsense, learning to code is quietly getting harder. The bottom rungs of the ladder are starting to rot. But for developers with real experience and the judgment to spot garbage when they see it, this may be the start of a golden era. Let’s talk about model collapse, sludge, and why real engineering still matters.
Impressive, But Wrong: The Hidden Risk of LLM-Generated Documentation
Automated documentation from AI is a powerful new tool, but it’s far from perfect. Without careful review and context, it can lead to costly mistakes and lost trust. If you rely on AI-generated docs in your projects, this article offers insights to avoid common pitfalls.
If Non-Competes Are So Valuable, Then Maybe Companies Should Pay For Them
Non-compete agreements have long been a standard tool in tech — but they often restrict talent more than they protect real risks. What if companies had to pay employees for any restrictions they impose? This approach would encourage fairer, more thoughtful use of NCAs and promote a healthier, more competitive industry.
Who’s Teaching Whom? The Future of AI Code Training
AI is writing more of our code — but can it keep getting smarter if it's learning mostly from itself? In this post, I explore the hidden risks of AI-on-AI training, what we lose when code loses its "soul," and how engineers can stay in the loop.
Vibe Coding Is Not Software Engineering — And That Should Worry You
Vibe coding — AI-assisted software development driven by prompts instead of planning — is taking the tech world by storm. It’s fast, flashy, and appealing to executives and end-users alike. But it also revives every old critique about software developers being reckless and undisciplined. In this post, I explain why vibe coding isn’t engineering — and why that should worry anyone who cares about software quality, safety, security, and sustainability.
If You're Gonna Vibe Code, At Least Take Testing Seriously
So you’re vibe coding now — just prompting an LLM until things sorta work? Fair enough. But let’s talk about what actually keeps your software from breaking in production. Spoiler: it’s not your autogenerated unit tests. This post dives into why testing strategy matters more than ever, especially when code understanding is shallow.
YAGNI Might Be Costing You More in the Long Run
Many agile teams take the "You Aren't Gonna Need It" (YAGNI) principle too far, dismissing future-proofing and flexibility in favor of short-term simplicity. But while YAGNI might save time now, it can lead to costly technical debt and retrofitting when your system needs to evolve.
Blaming "Low Performers" in Tech Layoffs
Tech layoffs are tough enough without companies adding insult to injury by framing them as a purge of "low performers." This harmful narrative unfairly stigmatizes employees, making their job search harder while protecting executives from accountability. Worse yet, companies that take this approach risk damaging their reputation and future hiring prospects.
Prompting Is Coding by Another Name
AI is transforming coding by redefining what it means to code. Writing prompts for LLMs is more about structured shorthand than perfect sentences, resembling programming itself. Like higher-level languages, LLMs shift the focus from syntax to architecture and validation. The challenge is not just generating code, but ensuring it integrates into a well-designed, high-quality system.
Rethinking Tech Interviews: Why Teams Should Let Candidates Use Google and ChatGPT
Tech interviews are broken, and it's time for a change. Expecting candidates to solve complex coding problems from memory without access to modern tools like Google and ChatGPT doesn't reflect how engineers actually work. In the real world, problem-solving, critical thinking, and leveraging available resources are key to success. Read on to discover why allowing candidates to use these tools can lead to better hires and a more effective interview process.
Rethinking the Role of AI and Coding Assistants in Software Development
AI can be more than just a tool for efficiency — it’s a game-changer for creativity and innovation. Instead of fearing job loss or focusing on cost-cutting, developers and businesses can use AI to amplify their capabilities and tackle challenges in new ways.
DevOps: A Set of Practices, Not a Job Role
"DevOps" is often misunderstood as a job role. This misconception can lead to stress, burnout, and suboptimal outcomes for both individuals and businesses. By treating DevOps as a set of practices, organizations can foster better collaboration, reduce stress, and achieve more sustainable results.
Software Companies Will Be the Long-Term Winners of the AI Revolution
Who will win the AI revolution? History shows the real value lies not in hardware but in software innovation. Discover why user experience and product thinking will define the future — and which companies might lead the charge.
Cloud Native or Cloud Chaos?
Explore the intricate journey of building your own cloud with CNCF tools. Discover the balance between control and complexity, and decide if the cloud-native path is the right choice for your organization.