
Understanding the Complete Cloud Cost of Kubernetes

September 4, 2020

When organizations think about the relationship between Kubernetes and cloud costs, they often focus on Kubernetes’ auto-scaling capabilities and what this means for optimizing compute resources. Kubernetes does allow organizations to provision compute resources more thinly, because the platform can scale up automatically if there’s a demand spike in the middle of the night.
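As a back-of-the-envelope illustration of why that matters, the sketch below compares a cluster statically sized for peak load with one that scales down overnight. Every number in it (node counts, peak hours, the per-node hourly rate) is an assumption made up for the example, not real pricing.

```python
# Illustrative only: a cluster sized statically for peak load vs. one that
# autoscales down during off-peak hours. All numbers are assumptions.
hourly_node_cost = 0.20          # assumed price per node-hour
peak_nodes, off_peak_nodes = 20, 6
peak_hours_per_day = 8
days = 30

static_cost = peak_nodes * 24 * days * hourly_node_cost
autoscaled_cost = (peak_nodes * peak_hours_per_day
                   + off_peak_nodes * (24 - peak_hours_per_day)) * days * hourly_node_cost

print(f"Static (sized for peak): ${static_cost:,.0f}/month")
print(f"Autoscaled:              ${autoscaled_cost:,.0f}/month")
```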

Configuring Kubernetes in a way that takes advantage of these cost-saving techniques isn’t automatic, and many organizations make mistakes that lead to higher-than-necessary compute costs. In addition, compute is not the only line item in anyone’s cloud bill. Many organizations ignore the other costs connected to Kubernetes implementations, such as networking and storage. Here are the issues you have to consider when you’re calculating the complete cost of running Kubernetes in the cloud.

 

Cluster size

Just because Kubernetes has auto-scaling capabilities doesn’t mean that everyone understands how they work—in fact, most people don’t. In addition, Kubernetes runs on a cloud provider, and optimizing it requires understanding not just how Kubernetes works, but also the differences between spot, reserved, and on-demand instances, and how to match workloads to the best type of compute resource.
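To see why the pricing model matters, here is a rough sketch comparing the same instance under on-demand, reserved, and spot pricing. The base rate and the discount percentages are assumptions chosen for illustration, not actual provider prices.

```python
# Illustrative comparison of purchase options for the same instance type.
# The base rate and discounts are assumptions, not real provider pricing.
on_demand = 0.0416               # assumed $/hour
reserved = on_demand * 0.60      # assume ~40% discount for a 1-year commitment
spot = on_demand * 0.30          # assume ~70% discount; capacity can be reclaimed

hours_per_month = 730
for name, rate in [("on-demand", on_demand), ("reserved", reserved), ("spot", spot)]:
    print(f"{name:>9}: ${rate * hours_per_month:6.2f}/month per instance")
```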

Many people end up creating clusters that are too large because they don’t understand how much compute the workload will need, and then they allocate too much memory to the underlying nodes. In the end, the clusters are not cost-effective.
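A simple way to see the effect of over-sized requests is to compare what pods ask for with what they actually use. The node cost and memory figures below are assumptions, purely for illustration:

```python
# Illustrative: how much of a node's cost is wasted when pods request far more
# memory than they actually use. All figures are assumptions.
node_monthly_cost = 250.0        # assumed cost of one node per month
requested_gib = 56               # memory the pods request (fills the node)
actually_used_gib = 20           # memory the pods typically consume

utilization = actually_used_gib / requested_gib
wasted_per_node = node_monthly_cost * (1 - utilization)
print(f"Effective utilization: {utilization:.0%}")
print(f"Wasted per node:       ${wasted_per_node:,.0f}/month")
```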

 

Networking

There are people out there who have built Kubernetes clusters that span regions or availability zones. This is usually an unforgettable experience, because it can be extremely expensive.

Inter-region and inter-zone traffic is not free. It can seem inconsequential because the price per gigabyte is small, but companies both underestimate the amount of traffic and forget that the charge applies on both the sending and the receiving side, so the real cost is double the estimate.
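A quick worked example of the double-billing effect (the per-gigabyte rate and traffic volume are assumed values, not a quote from any provider):

```python
# Illustrative: cross-zone traffic between two chatty services. The per-GB rate
# is an assumed value; the point is that both sides of the transfer are billed.
gb_per_day = 2_000                # assumed traffic between the services
rate_per_gb = 0.01                # assumed charge per GB, per direction

naive_estimate = gb_per_day * 30 * rate_per_gb
actual_cost = naive_estimate * 2  # sender and receiver are both charged
print(f"Naive estimate: ${naive_estimate:,.0f}/month")
print(f"Actual cost:    ${actual_cost:,.0f}/month")
```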

If you have multiple Kubernetes clusters that have to communicate with each other and each one is in its own Virtual Private Cloud (VPC), you can also end up needing a managed network address translation (NAT) gateway. These get expensive quickly — we’ve had customers spending tens of thousands of dollars per month on NAT gateways. 
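As a rough sketch of how NAT gateway charges add up, assume a handful of per-cluster VPCs, each with a gateway per availability zone, pushing a large amount of traffic through them. The hourly and per-gigabyte rates below are assumptions loosely modeled on typical list prices:

```python
# Illustrative: managed NAT gateway spend across several per-cluster VPCs.
# The rates are assumptions loosely modeled on typical list prices.
hourly_rate = 0.045
per_gb_rate = 0.045
gateways = 3 * 3                    # assume 3 clusters, one gateway per AZ
gb_through_nat_per_month = 500_000  # assumed cross-cluster traffic

monthly_cost = gateways * hourly_rate * 730 + gb_through_nat_per_month * per_gb_rate
print(f"NAT gateway cost: ${monthly_cost:,.0f}/month")
```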

Properly configuring the VPC or simplifying the network architecture can reduce those costs substantially, but not all organizations know how to do so. 

 

Storage

But wait, workloads in Kubernetes are stateless, right? Sure (though not always)… but that doesn’t mean there’s no storage associated with them. Some organizations over-provision disk storage, which leads straight to waste.

Regardless of how you manage storage, it can have networking ramifications as well, increasing costs both for the storage itself and for data transfer.
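A quick way to estimate the storage-side waste is to compare provisioned persistent volume capacity against what is actually used. The sizes and the per-gigabyte price below are assumptions for illustration:

```python
# Illustrative: waste from over-provisioned persistent volumes. The volume
# sizes and per-GB price are assumptions.
price_per_gb_month = 0.10
volumes = [
    (500, 40),    # (provisioned GiB, actually used GiB), assumed values
    (1000, 120),
    (200, 15),
]

provisioned = sum(p for p, _ in volumes)
used = sum(u for _, u in volumes)
monthly_waste = (provisioned - used) * price_per_gb_month
print(f"Provisioned: {provisioned} GiB, used: {used} GiB")
print(f"Paying for unused capacity: ${monthly_waste:,.2f}/month")
```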

 

Unnecessary multicloud strategies

Some organizations feel strongly that one of Kubernetes’ primary advantages is its ability to run in multiple clouds. This leads them to spend huge amounts of engineering resources on both developing and operating the application, because they fail to take advantage of the cloud provider’s native tools. In many cases, not only does this dramatically increase costs, it also reduces functionality.

Organizations often don’t think about the time required to develop, deploy and operate Kubernetes in the cloud. The expense of a multicloud strategy mostly shows up in payroll, not a cloud provider bill, so it doesn’t always make it into the cloud cost conversation. That can be a costly mistake. 
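A back-of-the-envelope way to put a number on that payroll cost (headcount, time share, and loaded salary here are all assumptions):

```python
# Illustrative: the payroll side of a multicloud strategy. Headcount, time
# share, and loaded salary are all assumptions.
engineers = 8
time_on_portability_work = 0.25   # fraction spent re-building provider-native features
loaded_annual_cost = 200_000      # assumed fully loaded cost per engineer

annual_payroll_cost = engineers * time_on_portability_work * loaded_annual_cost
print(f"Engineering cost of staying cloud-agnostic: ${annual_payroll_cost:,.0f}/year")
```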

 

Monitoring

Any production workload needs monitoring. As organizations adopt Kubernetes, they can either build monitoring tools themselves or use a third-party vendor. Neither option is free — there are costs either in payroll or in software license fees. But those direct costs are usually obvious. The more insidious monitoring costs are related to actually running the monitoring tools in the cloud environment. Monitoring tools often work by deploying an agent inside the cluster, and that agent will consume compute resources — compute resources that the company is paying for. In addition, monitoring tools extract logs and call APIs, both of which have costs associated with them. 
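As a rough illustration of that hidden overhead, the sketch below estimates the compute consumed by a per-node monitoring agent. The agent footprint, node size, and node cost are assumed values:

```python
# Illustrative: compute consumed by a monitoring agent running on every node.
# The agent footprint, node size, and node cost are assumed values.
nodes = 50
node_monthly_cost = 250.0
node_cpu_cores = 8
agent_cpu_cores = 0.25            # assumed steady-state CPU used by the agent

agent_share_of_node = agent_cpu_cores / node_cpu_cores
agent_compute_cost = nodes * node_monthly_cost * agent_share_of_node
print(f"Agent compute overhead: ${agent_compute_cost:,.0f}/month "
      f"({agent_share_of_node:.1%} of each node's CPU)")
```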

These monitoring costs are often buried in the cloud provider bill. Unless organizations know to look for them, the complete cost of Kubernetes can be obscured.

 

Conclusion

Calculating the true cost of a Kubernetes deployment shouldn’t focus just on the compute resources needed to run your clusters. It has to take into account the entirety of the resources needed to run the application — the networking involved, the databases, the various storage systems, the human resources and the monitoring system. 

The cost of Kubernetes, like all cloud cost management, is a question of prioritization. Making a more expensive choice isn’t necessarily wrong, as long as it’s done based on your current priorities and goals. The challenge with Kubernetes is that most people are still on the learning curve and are making choices that increase costs without providing any benefit to the organization. Using CloudZero’s Kubernetes cost monitoring, you can get a better picture of how Kubernetes relates to your cloud bill, as well as clarity about how to make changes in Kubernetes to reduce it.

Learn More About CloudZero

CloudZero is the first real-time cloud cost platform designed specifically for engineering and DevOps teams.
