Vibe Coding Is Real—But So Is the Hangover
It started, as many questionable decisions do, at 11pm.
I had a feature to build—nothing groundbreaking, just a data pipeline to process webhook events and dump them into a time-series store. The kind of thing I've built a dozen times. But instead of opening my editor and starting fresh, I opened my AI assistant and just... described what I wanted.
Twenty minutes later, it was done. Kind of.
The code worked—in the narrow, optimistic sense that it ran without errors and passed the happy-path test I'd written. But as I read through it, something felt off. The error handling was there, but it was defensive in the wrong places. The retry logic guarded against exactly the failure modes I'd never actually seen in production. And there was a subtle concurrency bug that the tests would never catch because they were single-threaded.
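That class of bug is easy to reproduce in miniature. Here's a hypothetical sketch—not the actual pipeline code—of a non-atomic read-modify-write that sails through any single-threaded test but loses updates the moment real concurrency shows up:

```python
import threading
import time

class EventCounter:
    # Hypothetical stand-in for the pipeline's counter; not the real code.
    def __init__(self):
        self.count = 0

    def record(self):
        current = self.count      # read
        time.sleep(0.001)         # simulate work between read and write
        self.count = current + 1  # this write can clobber another thread's

def hammer(counter, iterations=50):
    for _ in range(iterations):
        counter.record()

counter = EventCounter()
threads = [threading.Thread(target=hammer, args=(counter,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# A single-threaded test calling record() 400 times sees exactly 400;
# with 8 concurrent threads, lost updates leave the total well short of it.
print(counter.count)
```

A single-threaded test suite exercises `record()` in isolation and passes every time, which is exactly why the bug survives review.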
This is the vibe coding hangover.
What "Vibe Coding" Actually Is
The term, popularized by Andrej Karpathy in early 2025, describes a workflow where you describe what you want in natural language and let AI models turn those descriptions into code—without necessarily reading or understanding every line they produce. The "vibe" is that you're operating at the level of intent, not implementation.
For prototypes, throwaway scripts, and exploratory work, it's genuinely transformative. I've shipped more side projects in the last year than in the previous three combined. The activation energy for starting something new is close to zero.
But for production systems? The hangover is real.
The Gap Between Working and Right
Here's what I've observed: AI coding assistants are exceptional at producing code that looks correct and behaves correctly under normal conditions. They've ingested enough software to know what well-structured code looks like. The patterns are right. The naming conventions make sense. The tests pass.
What they struggle with is the kind of knowledge that comes from being in an on-call rotation at 3am and debugging a race condition that only manifests under load. The institutional knowledge of why retry logic uses exponential backoff with jitter. The muscle memory of checking for off-by-one errors in date range calculations.
This knowledge isn't in documentation. It's in the scar tissue of every engineer who's shipped and broken something in production.
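To make the backoff point concrete: here's a minimal sketch of retries with exponential backoff and "full jitter." The function name and parameters are my own illustration, not any particular library's API—the point is the `random.uniform` call, which spreads retries out so a fleet of clients failing at once doesn't hammer the server again in lockstep:

```python
import random
import time

def retry_with_backoff(operation, max_attempts=5, base_delay=0.1, max_delay=10.0):
    """Call operation(), retrying on exception with exponential backoff.

    The delay doubles each attempt (capped at max_delay), and the actual
    sleep is a random draw up to that cap: the jitter that prevents a
    thundering herd of synchronized retries.
    """
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the failure
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))
```

Without the jitter, every client that failed at the same moment retries at the same moment, and the retries themselves become the load spike—the kind of lesson that tends to come from an incident, not a README.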
The Lever Problem
There's a concept in systems thinking about high-leverage interventions—the places where a small change has an outsized effect. AI coding assistance is a lever. A very powerful one.
But here's the thing about levers: they amplify both your good instincts and your bad ones. If you know exactly what you want and can specify it clearly, AI tools help you get there faster. If you're vague about requirements or uncertain about edge cases, those tools will produce something that confidently does the wrong thing.
The developers I see struggling with AI assistance are often the ones who want it to replace thinking, rather than accelerate it. The ones thriving are using it as a very fast implementation layer for decisions they've already made.
What I've Changed
After a few too many afternoons debugging AI-generated code I didn't fully understand, I've settled into a workflow that works:
- Design before prompting. Before I ask an AI to write anything non-trivial, I sketch out the approach myself. What are the failure modes? What are the invariants? I write these down in comments or a brief doc, then use those as the specification in my prompt.
- Read everything. Yes, everything. I know that defeats some of the efficiency gains. But reading AI-generated code with the intent to understand it—not just approve it—makes me both a better reviewer and a better prompter.
- Test the unhappy paths. AI assistants write test suites optimized for passing, not for finding failures. I add tests for the scenarios that would actually hurt: network partitions, malformed input, concurrent modifications, clock skew.
- Treat it like code review, not code generation. The mental model shift that's helped me most: I'm not asking AI to write code for me. I'm asking it to produce a first draft that I'll review and refine. This keeps me in the driver's seat without sacrificing the speed benefits.
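Here's what the unhappy-path testing looks like in practice. The `parse_event` function below is a hypothetical stand-in for generated code (all names are illustrative); the tests cover two of the scenarios above—malformed input and clock skew—that a generated, pass-optimized suite tends to skip:

```python
import json
from datetime import datetime, timedelta, timezone

def parse_event(raw: bytes) -> dict:
    # Hypothetical webhook parser standing in for AI-generated code under test.
    try:
        event = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError("malformed payload") from exc
    ts = datetime.fromisoformat(event["timestamp"])
    # Reject events dated too far in the future: a sender with a skewed
    # clock would otherwise corrupt time-ordered storage.
    if ts > datetime.now(timezone.utc) + timedelta(minutes=5):
        raise ValueError("timestamp too far in the future")
    return event

def test_malformed_input():
    try:
        parse_event(b"{not json")
    except ValueError:
        pass
    else:
        raise AssertionError("malformed payload was accepted")

def test_clock_skew():
    future = (datetime.now(timezone.utc) + timedelta(hours=2)).isoformat()
    try:
        parse_event(json.dumps({"timestamp": future}).encode())
    except ValueError:
        pass
    else:
        raise AssertionError("future-dated event was accepted")

test_malformed_input()
test_clock_skew()
```

Writing these forces me to decide what the failure behavior should be—which is exactly the design thinking the generated code skipped.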
Where This Leaves Us
I don't think vibe coding is a fad. The productivity gains are too real, and the tools are improving faster than our skepticism can keep up with them. In another two years, the gap between what AI can generate and what passes production muster will probably narrow considerably.
But right now, in April 2026, the most effective AI-assisted developers I know are the ones who've retained their ability to think critically about the code they ship—who use AI to move faster, not to think less.
The vibe is good. Just don't forget to hydrate.
Herman Mak writes about software engineering, systems thinking, and whatever pseudorandom thoughts won't leave him alone. Subscribe below to get new posts delivered straight to your inbox.