One of the mistakes organizations make with Kubernetes costs is addressing them primarily after the fact, once the application is running successfully. There are certainly efficiency improvements that can be made at that point, but cost-control measures are best considered at the beginning of the application lifecycle, not the middle or end.
Addressing Kubernetes costs appropriately means treating them more like security. Organizations are encouraged to address security earlier and earlier in the lifecycle; ideally, it is baked into the entire development process. Cost controls are the same. They are most effective when considered at the outset, because decisions about the application architecture, some of which are hard to change later, can have a big effect on overall costs.
Here are some of the top ways that architectural decisions can make costs go up… or down.
Understanding in advance how network traffic will move through your application, and what that traffic will cost, can help reduce Kubernetes costs. At the very least, it ensures that when you pay for inter-zone or inter-region traffic, it is because you made a conscious decision that a cluster spanning zones or regions is worth the price.
The same goes for choosing to use a network address translation (NAT) gateway. These can get expensive, so organizations should use them only when actually needed and ensure they are properly configured.
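To make the inter-zone point concrete, here is a back-of-envelope estimate. The $0.01/GB-per-direction rate below is an assumption based on typical AWS cross-AZ pricing; check your provider's current price sheet before relying on it.

```python
# Rough estimate of monthly cross-AZ data transfer cost.
# ASSUMPTION: $0.01/GB in each direction, a typical AWS cross-AZ rate;
# verify against your cloud provider's current pricing.
CROSS_AZ_RATE_PER_GB_PER_DIRECTION = 0.01  # USD, assumed

def monthly_cross_az_cost(gb_per_day: float) -> float:
    """Traffic crossing an AZ boundary is billed in both directions."""
    return gb_per_day * 30 * CROSS_AZ_RATE_PER_GB_PER_DIRECTION * 2

# A chatty service replicating 500 GB/day across zones costs about
# $300/month at the assumed rate:
cost = monthly_cross_az_cost(500)
```

Numbers like this are small per gigabyte but compound quickly, which is why the decision to span zones deserves to be made deliberately rather than discovered on the bill.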
Every organization thinks about how to manage auto-scaling… as long as we’re talking about scaling up. Demand for your service will not stay 100% stable, though, nor is it likely to be growing steadily upward every minute of every day. The reason auto-scaling is so attractive is that it allows organizations to only provision the resources they actually need at the moment, but this strategy depends on the ability to scale down as soon as those resources aren’t needed.
Don’t think this is only something struggling companies should worry about. Even if your user base is growing steadily, usage probably fluctuates by time of day, day of the week, and season. If you’re a retailer, you expect to scale up on Black Friday… but have you thought about how you’ll scale back down?
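The up/down symmetry is visible in the scaling rule the Kubernetes Horizontal Pod Autoscaler documents: desired replicas = ceil(current replicas × current metric / target metric). A minimal sketch of that rule (the function name and the one-replica floor here are illustrative, not part of Kubernetes):

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float) -> int:
    """HPA-style scaling rule: desired = ceil(current * current/target).
    The same formula scales down when the metric drops below target."""
    return max(1, math.ceil(current_replicas * current_metric / target_metric))

# Scale up: 4 replicas at 90% CPU against a 60% target -> 6 replicas.
# Scale down: 6 replicas at 20% CPU against a 60% target -> 2 replicas.
```

The formula itself is symmetric; what organizations often forget to configure is everything around the scale-down path, such as how quickly replicas are released once load falls.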
To the extent possible, organizations should rely on managed services, whether from a cloud provider or a third-party vendor, to supply everything that doesn’t create additional value for the company. The only code inside the cluster should be the differentiated workloads that are unique to your organization. This approach not only saves engineering time during development and operations, but also gives the organization more flexibility in the instance types it runs on and ultimately keeps cloud costs down.
As long as workloads are stateless, there’s no reason not to use spot compute instances, which can reduce costs dramatically. The biggest argument against spot instances is that they can disappear with little warning. But if organizations are diligent about keeping only stateless, differentiated services in the cluster and storing data elsewhere, an interrupted instance costs nothing more than a restart.
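One reason statelessness makes spot instances safe is that an interrupted worker can simply stop pulling new work and let another instance pick it up. A minimal sketch, assuming the platform delivers SIGTERM when the instance is reclaimed (as a Kubernetes node drain does); the queue, handler, and processing logic here are illustrative:

```python
import signal

# Sketch of a stateless worker that drains cleanly when its spot
# instance is reclaimed. Because it holds no local state, stopping
# mid-queue is safe: unprocessed items remain for another instance.
shutting_down = False

def handle_term(signum, frame):
    global shutting_down
    shutting_down = True  # stop pulling new work; in-flight items finish

signal.signal(signal.SIGTERM, handle_term)

def run(queue):
    processed = []
    for item in queue:
        if shutting_down:
            break  # remaining items stay on the shared queue
        processed.append(item * 2)  # stand-in for real, stateless processing
    return processed
```

The key design choice is that the worker never writes results or checkpoints to local disk; everything durable lives outside the cluster, so reclamation is an inconvenience rather than a failure.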
The tendency to think of Kubernetes as a tool to achieve a truly portable application, one that can move seamlessly between clouds, leads many organizations to waste both time and money. Making applications portable is very difficult — the level of skill required to succeed does exist, but in almost every case would be better used creating more value for the organization. Chasing a multicloud strategy also prevents teams from taking advantage of individual cloud providers’ managed services, leaving them responsible for installing and operating a much wider range of services, usually much less efficiently than if the cloud provider had done it.
It’s unlikely that people will start talking about DevCostOps anytime soon, but perhaps they should. We live in a time when compute resources, at least in the cloud, are near infinite. You can always scale up — as long as you can pay for it. Compute resources are no longer limited by the number of machines in the data center, but they are still limited by the organization’s bank account. Taking costs into account as early as possible in the development process helps organizations get more value out of the resources they do pay for. CloudZero’s Kubernetes cost monitoring can help organizations look at Kubernetes costs earlier in the development process and make better decisions about how to allocate their financial resources.
Erik Peterson is the co-founder and CTO of CloudZero, a software startup veteran who has been building and securing cloud software for over a decade. Erik is a frequent speaker on FinOps, cloud economics, DevOps, and security.
CloudZero is the only solution that enables you to allocate 100% of your spend in hours — so you can align everyone around cost dimensions that matter to your business.