The year is 2025, and I’ve been watching teams discover what happens when you give developers AI superpowers without giving them AI super-governance.
It’s like the merchandising scene from Spaceballs: “Vibe Coding: The Flamethrower. The kids love this one.”
But here’s the thing: I’m not here to take away the flamethrowers. I’m here to hand out fire extinguishers and maybe suggest we practice in a safe room instead of the living room.
After decades in this industry, from building release pipelines at Nuance before DevOps had a name to serving as a founding engineer at CloudZero, I’ve learned that every transformative technology follows the same pattern. First comes the gold rush. Then comes the reckoning. Finally, if we’re lucky, comes wisdom.
With AI, we’re speed-running all three phases simultaneously.
The Heroes We Need (Spoiler: It’s You)
Remember the early days of cloud adoption? When “lift and shift” meant literally copying your data center to AWS and wondering why your bill looked like a phone number? We survived that. We even thrived. We built DevOps, created FinOps, and turned chaos into competitive advantage.
Aside: Remember chaos engineering? Find me in real life and I’ll tell you about CloudZero’s humble beginnings in simulating chaos engineering.
Now we need heroes again. Not the cape-wearing kind (though I won’t judge your work-from-home attire). We need the kind of heroes who see a problem and think, “I can build a system for that.”
Because here’s what I know: AI is powerful, useful, and here to stay. The genie isn’t going back in the bottle. ChatGPT has more users than most countries have citizens. Your competitors are using it. Your colleagues are using it. Your kids are probably using it to write book reports; unfortunately, my kids are using it to send me homegrown brain rot.
The question isn’t whether to use AI. The question is whether we’ll use it wisely.
The Uncomfortable Truth About AI Productivity
I opened the first article in my series on AI and productivity with a provocative statement: “AI will not be productive by default.”
Some people read that as anti-AI. They’re missing the point entirely. Saying AI won’t be productive by default is like saying a Formula 1 car won’t win races by default. It’s not an indictment of the technology; it’s a recognition that powerful tools still require skilled operators and proper systems.
OpenAI losing money on $200-per-month ChatGPT subscriptions isn’t a failure of AI. It’s a preview of what happens when transformative technology meets reality. The same reality that says compute costs money, complexity requires governance, and physics still applies even in the digital realm.
Earlier this year, CloudZero’s founder and CTO, Erik Peterson, predicted AI would follow the Jevons paradox (jump to 3:52 in the video if you’re in a hurry).
You can tell that theory is entering the zeitgeist the moment it hits your favorite podcast. The crux: efficiency gains lead to more consumption, not less. The future holds more software development. Who will do it?
The Critical Thinking Imperative
There’s a phrase from Howard Rheingold that’s been rattling around my brain: “Tools for Thought,” the title of his 1985 book. He wasn’t talking about our current AI, but the phrase perfectly captures what AI should be: not a replacement for thinking, but an amplifier of it. In other words: mind augmentation.
This is where I get on my soapbox: We must not offload critical thinking to AI.
I see it happening already. Developers who treat AI suggestions like divine revelation. Teams that copy-paste without comprehension. Organizations that mistake token generation for strategy.
Your AI doesn’t know your business context. It doesn’t understand your technical debt. It can’t feel the pain of your on-call rotation. It’s a brilliant intern with the world’s knowledge but zero wisdom about your specific situation.
The teams succeeding with AI aren’t the ones who blindly trust it. They’re the ones who challenge it, guide it, and most importantly, think alongside it. They use AI as a collaborator, not an oracle.
The Same Same But Different Problem

You know what’s funny? Every “revolutionary” technology problem is just an old problem in a new costume:
- The Code Constraint: Still about quality, comprehension, and maintainability. You read the code many more times than you write it.
- The Server Constraint: Still about resources, deployment, and scale. 99% of the software lifecycle is operation (hopefully).
- The Wallet Constraint: Still about costs, budgets, and ROI. Nobody has an infinite wallet.
The Mythical Man-Month is 50 years old and more relevant than ever. The Theory of Constraints predates the web. Physics doesn’t care about your AI strategy.
But here’s where it gets interesting: the solutions are also “same same but different.”
Extending The DevOps DNA
When I talk about applying DevOps and FinOps principles to AI, I’m not suggesting we dust off old runbooks and change “server” to “model.” I’m talking about taking the DNA of these practices — the systems thinking, the feedback loops, the culture of experimentation — and evolving them for a new world.
From DevOps:
- Version control becomes model and prompt versioning
- CI/CD becomes continuous training and deployment
- Infrastructure as Code becomes AI Behavior as Code
- Observability becomes token tracking and hallucination monitoring
From FinOps:
- Cost attribution becomes token attribution
- Right-sizing becomes model selection
- Reserved instances become API rate negotiations
- Waste reduction becomes prompt optimization
These aren’t just analogies. They’re blueprints. The teams building these systems today are the ones who’ll still be in business tomorrow.
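To make the FinOps half concrete, here’s a minimal sketch of token attribution. Everything in it is an assumption for illustration, not a real vendor or CloudZero API: the price table, the model names, and the `record_usage` helper. The idea is simply that every call gets charged to the team or feature that made it, the same way you’d tag cloud resources.

```python
from collections import defaultdict
from dataclasses import dataclass, field

# Illustrative prices per 1K tokens; real pricing varies by vendor and changes often.
PRICE_PER_1K = {"small-model": 0.0005, "large-model": 0.015}

@dataclass
class TokenLedger:
    """Attributes token spend to a team or feature tag, FinOps-style."""
    totals: dict = field(default_factory=lambda: defaultdict(float))

    def record_usage(self, tag: str, model: str,
                     prompt_tokens: int, completion_tokens: int) -> float:
        """Charge a single LLM call to whoever made it; return the dollar cost."""
        cost = (prompt_tokens + completion_tokens) / 1000 * PRICE_PER_1K[model]
        self.totals[tag] += cost
        return cost

ledger = TokenLedger()
ledger.record_usage("checkout-service", "large-model", 1200, 400)
ledger.record_usage("support-bot", "small-model", 800, 300)
print(dict(ledger.totals))  # cost per tag, ready for showback or chargeback
```

The design choice mirrors cloud cost allocation: if a call isn’t tagged, it isn’t attributable, and untagged spend is where budgets go to die.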
The Call To Action
So here’s my challenge to you, the heroes among us:
1. Be intentional
Stop treating AI adoption like a land grab. Start treating it like city planning. Build the boring infrastructure before you need it. Create governance before you get the scary AWS bill. Document your patterns before they become anti-patterns.
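One hedged sketch of what “boring infrastructure before you need it” can mean: a hard daily token budget that fails closed. The budget numbers and the `BudgetExceeded` exception are hypothetical; the point is that the guardrail exists before the invoice does.

```python
class BudgetExceeded(RuntimeError):
    """Raised when a tag blows through its daily token budget."""

# Hypothetical per-team daily limits; tune these to your own risk tolerance.
DAILY_TOKEN_BUDGET = {"checkout-service": 2_000_000, "support-bot": 500_000}
spent_today: dict[str, int] = {}

def charge_tokens(tag: str, tokens: int) -> None:
    """Fail closed: refuse the call now instead of discovering it on the invoice."""
    used = spent_today.get(tag, 0) + tokens
    if used > DAILY_TOKEN_BUDGET.get(tag, 0):
        raise BudgetExceeded(f"{tag} exceeded its daily token budget")
    spent_today[tag] = used
```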
2. Think systems, not tools
Your AI strategy isn’t about which model you use or which framework you adopt. It’s about the systems you build around them. How do you handle failures? How do you track costs? How do you ensure quality? These questions matter more than your choice of LLM.
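As a sketch of what “the system around the model” means, here’s a hypothetical wrapper that answers the failure question directly: retry transient errors with backoff, then degrade to a cheaper model. `call_model` is a stand-in for whatever client you actually use, and the model names are made up.

```python
import time

def call_model(model: str, prompt: str) -> str:
    """Stand-in for your real LLM client; swap in your vendor SDK here."""
    if model == "large-model":
        raise TimeoutError(f"{model} timed out")  # simulate a primary outage
    return f"[{model}] response to: {prompt}"

def complete_with_fallback(prompt: str, primary: str = "large-model",
                           fallback: str = "small-model", retries: int = 2) -> str:
    """Retry transient failures with backoff, then degrade gracefully."""
    for attempt in range(retries):
        try:
            return call_model(primary, prompt)
        except TimeoutError:
            time.sleep(2 ** attempt)  # exponential backoff: 1s, then 2s
    return call_model(fallback, prompt)  # a degraded answer beats no answer

print(complete_with_fallback("Summarize this incident report"))
```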
3. Create feedback loops
AI without feedback is like driving with your eyes closed — fast, exciting, and guaranteed to end badly. Build checkpoints. Create review processes. Measure outcomes, not just outputs.
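Here’s one feedback loop in miniature, under obvious assumptions (a handful of hypothetical golden cases and a naive substring check; real evals are richer). Run something like it in CI so prompt changes are measured before they ship, not after:

```python
# Hypothetical golden cases: prompts paired with facts the answer must contain.
GOLDEN_CASES = [
    ("What is our refund window?", "30 days"),
    ("Which regions do we ship to?", "US and Canada"),
]

def evaluate(generate) -> float:
    """Score a generate(prompt) -> str function against the golden cases."""
    passed = sum(expected in generate(prompt) for prompt, expected in GOLDEN_CASES)
    return passed / len(GOLDEN_CASES)

def gate(generate, threshold: float = 1.0) -> None:
    """Block the deploy when outcomes regress, not just when tests fail."""
    score = evaluate(generate)
    if score < threshold:
        raise SystemExit(f"Prompt regression: eval score {score:.0%} < {threshold:.0%}")
```

Wire `gate` into the same pipeline that deploys the prompt, and “measure outcomes” stops being a slogan and becomes a merge blocker.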
4. Share your learning
The DevOps movement succeeded because people shared. They blogged about failures. They open-sourced their tools. They created communities. We need the same spirit for AI operations. Your expensive mistake could save someone else a fortune.
5. Remember the humans
Every line of AI-generated code will be debugged by a human. Every AI decision will be questioned by a human. Every AI failure will be fixed by a human. Build your systems accordingly.
Final Thoughts
The AI discourse right now is exhausting. It’s either “AI will save us all!” or “AI is destroying craftsmanship!” We’re stuck between the 100% vibe code evangelists and the 100% artisanal code and cheese-platter purists.
In 2012, Daniel H. Pink noted in “To Sell is Human” that knowledge has moved from sellers to buyers, creating a shift from caveat emptor (“let the buyer beware”) to caveat venditor (“let the seller beware”).
But right now? Every AI post is selling something. Every success story has an agenda. Every failure story has a counter-agenda. A friend once said I give off Susan Powter vibes — well, here’s me in full “Stop the Insanity!” mode.
The DevOps movement succeeded because practitioners shared real experiences. We need the same for AI. Because right now, between the hype and the hate, the real stories are getting lost. And those stories — your stories — are what will help us all build better systems.
Because in the end, we’re all in this together. And the only way through is to help each other.