After three days of demos, sessions, and hallway conversations at KubeCon Atlanta, one thing became clear to CloudZero CTO Erik Peterson: the cloud-native world is shifting from cost control to value engineering.
Teams aren’t just fighting bills anymore. They’re fighting complexity, GPU scarcity, Kubernetes sprawl, and pressure from the business to justify every dollar of technical investment. And this year’s KubeCon attendees? They were ready for those conversations.
Here are eight signals Erik picked up on the show floor — and what they reveal about what’s coming next.
1. Cost Is Becoming Telemetry — A Direct Signal Of Engineering Quality
This theme surfaced repeatedly in hallway and booth conversations: cost is starting to behave like an engineering signal in its own right.
Erik kept hearing the same takeaway: cloud cost isn’t just a financial artifact — it’s runtime evidence of how efficiently a system converts resources into business value.
One quote captured it well: “Cost isn’t about money spent — it’s indicative of the efficiency by which a system achieves the economic goals of the business… it is actually a telemetry signal.”
This framing resonated strongly with practitioners who are already drowning in latency metrics, error budgets, saturation gauges, and dashboards. Cost, they said, is becoming just as operationally meaningful.
- A spike in spend might mean mis-scheduled workloads.
- A drop could indicate scaled-down usage or customer churn.
- A gradual climb might track directly with adoption or inefficiencies.
Engineers get telemetry. Finance gets dollars. Cost-as-telemetry bridges the two — making it a shared language for value.
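A minimal sketch of that idea: treat spend as a time series and apply the same naive spike detection you’d apply to latency. Everything below (function names, the trailing window, the threshold, the numbers) is invented for illustration, not any vendor’s actual model.

```python
# Illustrative sketch: hourly spend as a telemetry series, flagged the same
# way a naive latency-spike detector would flag a regression.

def unit_cost(spend_usd: float, requests: int) -> float:
    """Cost per request: a unit-economics gauge, not just a dollar total."""
    return spend_usd / requests if requests else float("inf")

def flag_cost_anomalies(series, window=3, threshold=1.5):
    """Flag points exceeding `threshold` times the trailing-window average."""
    flags = []
    for i in range(window, len(series)):
        baseline = sum(series[i - window:i]) / window
        flags.append(series[i] > threshold * baseline)
    return flags

# A mis-scheduled workload shows up in hour 4 as a spend spike:
hourly_spend = [100, 102, 98, 101, 240, 99]
print(flag_cost_anomalies(hourly_spend))  # the 240 reading is flagged
```

The point is only that the machinery is identical to ops alerting: a baseline, a threshold, and a signal engineers already know how to act on.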
Why this matters:
It reframes cloud cost work from reactive budget policing to proactive engineering enablement. That makes “cost” something developers actually want to pay attention to.
The trend now:
FinOps is converging with observability. Cost becomes part of the engineering feedback loop, not a quarterly surprise.
2. AI Optimization Is Exponential — Tiny Changes Create Massive Value
One of the most-discussed examples from the keynote came from OpenAI Technical Staff Member Fabian Ponce, whose story spread quickly through hallway conversations. A single code change yielded massive capacity gains:
“One line of code saved like 60,000 vCPUs… they boosted the capacity of the system.”
This anecdote captures a fundamental shift: AI infrastructure amplifies both inefficiency and optimization.
Traditional workloads scale linearly — a tweak might save 2%, maybe 5%. But with AI, inefficiency multiplies exponentially:
- GPU queues get clogged
- Model-serving pipelines back up
- Training jobs starve clusters
- Autoscaling reacts too late or too aggressively
Conversely, a micro-optimization — a better batch size, a more efficient embedding flow, a more intelligent scheduler — can unlock millions in infrastructure value.
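The batch-size example can be made concrete with back-of-envelope arithmetic. The numbers below are made up purely to show the shape of the effect: when a request carries fixed per-call overhead, batching amortizes it, and throughput (effective capacity) climbs far faster than the change suggests.

```python
# Illustrative arithmetic only: how batching amortizes fixed per-call
# overhead on an inference path. All timings are invented.

def throughput(batch_size: int, fixed_overhead_ms: float = 10.0,
               per_item_ms: float = 2.0) -> float:
    """Items served per second for a given batch size."""
    latency_ms = fixed_overhead_ms + per_item_ms * batch_size
    return batch_size / (latency_ms / 1000.0)

for bs in (1, 8, 32):
    # batch of 1 -> ~83 items/s; batch of 32 -> ~432 items/s
    print(bs, round(throughput(bs), 1))
```

A five-fold capacity gain from a one-parameter change is exactly the dynamic behind the “one line of code” story: in AI infrastructure, small knobs move big numbers.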
KubeCon attendees heard stories like this again and again. AI infra engineers talked less about cost and more about capacity, throughput, and reliability — the things that determine whether their teams can ship new AI capabilities at all.
Why this matters:
With GPUs scarce and demand exploding, the fastest way to “scale” AI is to optimize it.
The trend now:
Optimization becomes a strategic lever for AI teams — not a cost-cutting exercise, but a capacity multiplier.
Related read: The Anti-Zombie, Battle-Tested Guide To AI FinOps: 10 Insights
3. K8s Teams Expect Workload-Level, Prescriptive Guidance
Kubernetes practitioners don’t want another dashboard. They want concrete, workload-level guidance that stabilizes services and ties resource decisions back to business outcomes. When engineers saw how CloudZero’s agent reconstructs pod behavior, the reaction was immediate.
CloudZero Senior Software Engineer Rob Hocking’s demos captured this dynamic. “If a pod is being killed… we notice that and say, ‘Hey, increase a little bit. Here’s what we think you should use,’” he’d explain, and attendees would lean in.
The problems actually raised were operational: OOM kills, noisy neighbors, stale requests/limits, and recurring patterns that make systems unpredictable.
When recommendations are grounded in 30 days of real behavior and coupled with multi-dimensional cost allocation, the value is obvious. Engineers can see not only what to change, but what that change means for cloud unit economics and for product metrics.
That’s resource intelligence: safety-first, workload-aware advice that reduces toil, improves reliability, and maps directly to measurable outcomes.
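A toy version of the detect-and-recommend loop described above might look like the sketch below. The headroom factor, data shapes, and function name are assumptions for illustration, not CloudZero’s implementation.

```python
# Toy detect -> recommend loop: size a pod's memory limit from observed
# behavior plus headroom. The 1.2x headroom factor is an assumption.

def recommend_memory_limit(peaks_mib, current_limit_mib, headroom=1.2):
    """If observed peaks crowd the limit (OOM-kill territory), recommend
    a new limit grounded in real behavior; otherwise keep the current one."""
    observed_max = max(peaks_mib)
    if observed_max * headroom <= current_limit_mib:
        return current_limit_mib          # limit already has headroom
    return round(observed_max * headroom)  # "increase a little bit"

# Daily memory peaks (MiB) for a pod that keeps getting OOM-killed at 512:
peaks = [470, 480, 510, 495, 505]
print(recommend_memory_limit(peaks, current_limit_mib=512))  # -> 612
```

The real version is grounded in 30 days of multi-dimensional data rather than a single max, but the shape is the same: observe, recommend, and only then automate.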
Why this matters:
Practitioners want actionable guidance — not theoretical optimization. They want cost allocation that reflects how their systems behave and cloud unit economics that inform tradeoffs between performance, capacity, and spend.
The trend now:
Operational guidance is shifting from periodic spreadsheet work to continuous, model-driven resource intelligence: detect → recommend → (where appropriate) automate — integrated into platform workflows and CI/CD.
4. AI Belongs In The Foundation — Not Scattered Across Features
During CloudZero’s internal product meeting at the event, leaders aligned around a crucial principle: AI should appear in the product as a unified, cohesive capability.
“[CloudZero’s Advisor] is the chat interface… but CloudZero Cloud Cost Intelligence will be embedded into everything.”
This direction avoids the trap many vendors fall into — where each squad tacks on a separate AI widget or experimental feature, leading to a fractured, inconsistent UX.
Instead:
- CloudZero Cloud Cost Intelligence becomes the underlying layer.
- Ask Advisor becomes the user-facing entry point.
- The intelligence quietly enhances existing surfaces.
This model mirrors the AI direction of the most advanced SaaS and cloud platforms — tightly integrated, brand-coherent, flexible enough to grow.
Why this matters:
Customers want trust and clarity. A unified AI model feels intentional, stable, and scalable.
The trend now:
AI is becoming an expectation — not a feature — and product leaders are standardizing how it should show up.
5. Elite Engineering Teams Are Doubling Down on Prioritization + Automation
Engineering leaders repeated a mantra at the show and in CloudZero’s own internal discussions:
focus on one priority at a time — and automate any work that repeats.
“First time you do something, write it down. Second time, follow instructions. Third time, automate.”
This isn’t new, but what is new is how rigorously technical leaders are enforcing it to manage the explosion of AI- and Kubernetes-related complexity.
Faced with conflicting roadmaps, constant inbound requests, and relentless platform growth, high-performing teams are embracing:
- stricter scopes
- serialized top priorities
- frameworks for saying no
- automation as the default answer to toil
This is the only way to ship large-scale initiatives without growing headcount endlessly.
Why this matters:
Companies that don’t standardize prioritization and automation will drown in operational overhead.
The trend now:
Engineering cultures are professionalizing — mirroring the discipline of product and design ops.
6. Executives Want A Single Benchmarkable KPI For Cloud Efficiency
FinOps leaders discussed a recurring problem: every company uses different efficiency metrics, making them impossible to compare.
Enter the Effective Savings Rate (ESR) — a KPI that blends rate‑efficiency (what you pay) and usage‑efficiency (how well you use it).
Overheard: “If an exec asked, ‘What’s CPU utilization across fleet?’, you need rate and usage.”
Execs don’t want 30 charts. They want one number that tells them:
- Are we efficient for a company our size?
- Are our commitments helping or hurting?
- How do we compare to peers?
- Where should we invest next?
This demand keeps rising as cloud spend becomes a top-3 line item at many companies.
Why this matters:
Without a common KPI, cloud efficiency cannot be governed or improved systematically.
The trend now:
The FinOps community is converging on benchmarkable KPIs — and vendors who support them will win executive trust.
Related read: Our monthly Cloud Economics Pulse benchmark report is based on CloudZero network data and provides a glimpse into current cloud cost trends.
7. KubeCon’s Audience Has Matured — The Problems Are Bigger Now
Many attendees noted how different the crowd felt compared to Detroit: “Last time… wrong people. This year… people are talking about big problems.”
This was not a Kubernetes 101 crowd. These were:
- platform leads running massive clusters
- SREs managing AI fleets
- FinOps practitioners responsible for tens of millions in spend
- engineers supporting mission-critical infrastructure
The “weekend experimenter” energy is disappearing. The team-in-the-trenches energy is rising.
Why this matters:
Higher-quality conversations bring better signals for product direction, partnerships, and GTM strategy.
The trend now:
Conferences are becoming centers of gravity for systems-scale challenges — not just exploration or tooling shopping.
8. The Message That Resonated Everywhere: ‘We Engineer Profit.’
In dozens of booth conversations, one line consistently broke through the noise, from Erik himself:
“If someone asked what CloudZero does, I’d say: We engineer profit.”
This hit a nerve for both engineers and executives. Engineers saw a path to professional advancement: “This helps me get promoted.”
Execs saw a path to business outcomes: “This helps us improve margins.”
Very few vendors articulate cloud value in a way that serves both audiences simultaneously.
But when the message is this clear — cost optimization as career development and margin improvement in one go — it changes how people see the category.
Why this matters:
It reframes cloud cost from a CFO problem to an engineering accomplishment.
The trend now:
The most powerful GTM messaging in cloud links engineering work directly to business value — and to engineers’ own success.
Cloud Optimization Is Evolving Into Value Engineering
Taken together, these signals reveal something bigger than a set of trends — they point to a mindset shift already underway across the cloud-native ecosystem.
Kubernetes, AI, and platform teams aren’t just reacting to rising costs. They’re reshaping how they define efficiency, how they ship infrastructure, and how they tie technical work back to business value. Observability now includes cost. AI optimization is about multiplying capacity, not just trimming spend. And engineers are being asked and empowered to show impact, not just performance.
This is the world CloudZero is building for. And if KubeCon Atlanta is any indicator, it’s the world the rest of the industry is moving toward — fast.
Want to read more from the floor? Check out our three signals showing where K8s is heading next.


