Table Of Contents

- Networking
- Scaling down
- Keep Cluster Contents Differentiated
- Keep Workloads Stateless
- Avoid Multi-Cloud

One of the mistakes organizations make with Kubernetes costs is addressing them primarily after the fact, once the application is already running successfully. There are certainly changes that can improve efficiency at that point, but cost-control measures are best considered at the beginning of the application lifecycle, not the middle or the end.

Addressing Kubernetes costs appropriately means treating them more like security. Organizations are encouraged to address security earlier and earlier in the lifecycle; ideally it is baked into the entire development process. Cost controls are the same. They are most effective when considered at the outset, because decisions about application architecture, some of which are hard to change later, can have a big effect on overall costs.

Here are some of the top ways that architectural decisions can make costs go up… or down. 

Networking

Understanding in advance how network traffic will move through your application, and what that traffic will cost, can help reduce Kubernetes costs. At the very least, it ensures that when you're paying for inter-zone or inter-region traffic, it's because you've made a conscious decision that a cluster spanning zones or regions is worth the expense.

The same goes for network address translation (NAT) gateways. They can get expensive, so organizations should use them only when actually needed and ensure they are properly configured.
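One way to act on this early is to tell Kubernetes to keep traffic inside a zone whenever healthy local endpoints exist. Below is a minimal sketch of a Service manifest using the `trafficDistribution` field (available in recent Kubernetes releases); the service name, selector, and ports are placeholders, not from the original article:

```yaml
# Hypothetical Service: "checkout" is a placeholder name.
# trafficDistribution: PreferClose asks the cluster to route requests to
# endpoints in the caller's own zone when possible, which avoids
# inter-zone data transfer charges on most cloud providers.
apiVersion: v1
kind: Service
metadata:
  name: checkout
spec:
  selector:
    app: checkout
  ports:
    - port: 80
      targetPort: 8080
  trafficDistribution: PreferClose
```

On older clusters, the `service.kubernetes.io/topology-mode: Auto` annotation serves a similar purpose; either way, the point is that zone-local routing is a deliberate configuration choice, not a default you can assume.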


Scaling down

Every organization thinks about how to manage auto-scaling… as long as we’re talking about scaling up. Demand for your service will not stay 100% stable, though, nor is it likely to be growing steadily upward every minute of every day. The reason auto-scaling is so attractive is that it allows organizations to only provision the resources they actually need at the moment, but this strategy depends on the ability to scale down as soon as those resources aren’t needed. 

Don’t think this is only something struggling companies should worry about. Even if your user base is increasing steadily, there is probably fluctuation in usage based on time of day, day of the week and season. If you’re a retailer, you expect to scale up on Black Friday… but have you thought about how you’ll scale back down?
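Scale-down behavior is something you can configure explicitly rather than leave to defaults. Here is a hedged sketch of a HorizontalPodAutoscaler using the `autoscaling/v2` API; the deployment name, replica counts, and thresholds are illustrative placeholders:

```yaml
# Hypothetical HPA for an example Deployment named "storefront".
# The behavior.scaleDown block controls how quickly capacity is released
# once demand drops, which is where the cost savings actually happen.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: storefront
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: storefront
  minReplicas: 2
  maxReplicas: 50
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300   # wait 5 minutes before shrinking
      policies:
        - type: Percent
          value: 50                     # remove at most half the pods
          periodSeconds: 60             # per minute
```

The stabilization window guards against thrashing on brief dips, while the percent policy ensures the deployment actually shrinks back down after a Black Friday-style spike instead of holding peak capacity indefinitely.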

Keep Cluster Contents Differentiated

To the extent possible, organizations should rely on managed services, whether from a cloud provider or a third-party vendor, to supply everything that doesn’t create additional value for the company. The only code that should be inside the cluster should be differentiated workloads that are unique to your organization. This approach saves engineering time during development and operations, but also gives the organization more flexibility in the types of instances to run on and ultimately keeps cloud costs down. 

Keep Workloads Stateless

As long as workloads are stateless, there's no reason not to use spot compute instances, which can reduce costs dramatically. The biggest argument against spot instances is that they can be reclaimed at any time, but an interruption is harmless to a stateless workload. If organizations are diligent about keeping only differentiated services in the cluster and storing state elsewhere, spot interruptions become a non-issue.
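Steering a stateless workload onto spot capacity is usually just a scheduling constraint. The sketch below assumes a cluster using Karpenter, which labels nodes with `karpenter.sh/capacity-type`; managed node groups use different labels (for example, `eks.amazonaws.com/capacityType` on EKS), and the deployment name and image are placeholders:

```yaml
# Hypothetical stateless Deployment pinned to spot nodes via nodeSelector.
# Assumes the Karpenter capacity-type label; adjust for your provisioner.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker                # placeholder name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      nodeSelector:
        karpenter.sh/capacity-type: spot   # schedule only on spot nodes
      containers:
        - name: worker
          image: example.com/worker:latest # placeholder image
```

Because the pods hold no state, the scheduler can simply recreate them on fresh capacity when a spot node is reclaimed.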

Avoid Multi-Cloud

The tendency to think of Kubernetes as a tool to achieve a truly portable application, one that can move seamlessly between clouds, leads many organizations to waste both time and money. Making applications portable is very difficult — the level of skill required to succeed does exist, but in almost every case would be better used creating more value for the organization. Chasing a multicloud strategy also prevents teams from taking advantage of individual cloud providers’ managed services, leaving them responsible for installing and operating a much wider range of services, usually much less efficiently than if the cloud provider had done it. 

It’s unlikely that people will start talking about DevCostOps anytime soon, but perhaps they should. We live in a time when compute resources, at least in the cloud, are near infinite. You can always scale up, as long as you can pay for it. Compute resources are no longer limited by the number of machines in the data center, but they are still limited by the organization’s bank account. Taking costs into account as early as possible in the development process helps organizations get more value out of the resources they do pay for. CloudZero’s Kubernetes cost monitoring can help organizations look at Kubernetes costs earlier in the development process and make better decisions about how to allocate their financial resources.
