If cost optimization is your only reason for adopting Kubernetes and containers, you might be in for a rude surprise — many companies find that costs increase after moving to Kubernetes. Even companies that adopt Kubernetes for other reasons, like time-to-market advantages, should follow basic cost control best practices to stay within budget.
Optimizing the cloud costs of running Kubernetes doesn’t have to involve trade-offs in performance or availability. As with most types of cloud cost control, the key to following Kubernetes cost control best practices is to get visibility into how you are using cloud resources and to reduce waste. In most cases, organizations can reduce costs substantially before they have to think about making trade-offs.
As with most Kubernetes-related best practices, only some of these are directly related to technology choices and architectural decisions. Cultural and organizational decisions — how you talk about costs, how you integrate cost management into your workflow, and how you tackle cost issues — can be just as important for continued success.
Get Deep Visibility
The first step to optimizing costs related to running Kubernetes is to get deep, granular visibility into how Kubernetes influences costs. Understanding the total cost of running an application per day or per hour isn’t enough to start making changes that can bring costs down. Organizations should have access to the following information about their Kubernetes deployment:
Provisioned compute vs actual compute usage
Current performance vs performance targets
Memory, CPU, and disk usage
What jobs are running at any given moment, and where they are running
How traffic is moving throughout the system
The costs of everything other than compute, including things like storage, data transfer and networking
A map of how things run on the cluster
How much the application costs to run right now, as well as how those costs are trending
The complete cost picture, including the costs related to running monitoring tools
With this information, organizations can make informed decisions about adjusting resource provisioning and/or changing the application architecture to reduce costs without impacting performance or availability.
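As a concrete illustration, the "provisioned compute vs. actual usage" gap can be turned into a simple waste signal. This is a hypothetical sketch — the pod names and numbers are illustrative, and it assumes you already collect requests and usage from a metrics pipeline such as metrics-server or Prometheus:

```python
# Hypothetical sketch: compare provisioned (requested) CPU against actual
# usage to estimate waste. Pod names and figures are illustrative, not real data.

def waste_ratio(requested_cores: float, used_cores: float) -> float:
    """Fraction of requested CPU that sits idle."""
    if requested_cores <= 0:
        return 0.0
    return max(0.0, (requested_cores - used_cores) / requested_cores)

# (requested cores, average used cores), as collected from your metrics pipeline
pods = {
    "checkout-api": (4.0, 0.9),
    "batch-worker": (8.0, 6.5),
}

for name, (req, used) in pods.items():
    print(f"{name}: {waste_ratio(req, used):.0%} of requested CPU is idle")
```

A pod requesting four cores but averaging under one is a strong candidate for right-sizing its resource requests.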
Measure Before And After Costs
Organizations should start treating cost as one of the operational metrics tracked as part of the engineering process. Just as it’s normal to measure performance and uptime before and after major and minor changes, measuring cost changes should be a part of the operational practice.
Similarly, just as organizations have service level objectives related to performance and availability, they should have internal guidelines related to how much it’s acceptable for an application to cost to run. They should be able to measure, understand and then accept or decline those costs after a change is made to the application.
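Such a cost guideline can work like an SLO gate in a review process. Here is a hypothetical sketch — the budget and threshold values are illustrative assumptions, not recommendations:

```python
# Hypothetical sketch of a cost guardrail, analogous to an SLO check:
# flag a change if it pushes daily cost past an agreed budget, or increases
# it by more than an accepted percentage. All thresholds are illustrative.

def cost_change_ok(before_daily: float, after_daily: float,
                   budget_daily: float, max_increase_pct: float = 10.0) -> bool:
    """Return True if the post-change cost is acceptable."""
    if after_daily > budget_daily:
        return False  # over the absolute budget
    if before_daily > 0:
        increase_pct = (after_daily - before_daily) / before_daily * 100
        if increase_pct > max_increase_pct:
            return False  # increase too steep, even if under budget
    return True

# A change moving daily cost from $200 to $215, against a $300 budget
print(cost_change_ok(before_daily=200.0, after_daily=215.0, budget_daily=300.0))
```

The point is not the specific numbers, but that cost deltas become something the team explicitly accepts or declines, rather than discovers on the next invoice.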
Each application has a different role and different priorities. The key is to become aware of how costs fit into the decisions made related to that application. There’s no ‘good’ or ‘bad’ cost, necessarily, as long as the organization is allocating its resources in a way that matches priorities. Some applications might be very costly but also mission-critical and/or very profitable — simply knowing the raw dollar amount an application costs to run doesn’t provide enough information about whether or not it’s ‘worth it.’
Follow Architectural Best Practices
How the application is built can have a major impact on the overall costs. Here are some general architectural best practices:
Make use of cloud provider services so you’re not managing anything you don’t have to
Reduce or eliminate traffic between availability zones and regions
Make sure applications can scale down as easily as they can scale up
Keep workloads stateless to allow using spot instances
Lock yourself in: Don’t build the application to be portable between clouds
A note on that last bullet: Not everyone considers this a "general" architectural best practice. However, many experts find multi-cloud particularly challenging for cost optimization: not only can it lead to high network costs, but it also prevents you from using the best-of-breed services your cloud provider has to offer.
Ideally, organizations will follow cost best practices from the beginning. In real life, understanding these best practices and combining them with deep visibility into the cost ramifications of the different parts of the application allows teams to continually improve the cost effectiveness of applications, often without any other trade-offs.
Cost Is Tech Debt
Cost-related visibility is like any other monitoring tool — you use it to discover problems with your architecture or infrastructure, and to take action.
Cost is a form of technical debt, in that it points to a technical problem that will have to be addressed in the future. Unlike some kinds of tech debt, it’s generally easier to communicate the importance of addressing cost tech debt to business leaders. Nonetheless, cleaning up tech debt means engineers aren’t shipping new features to customers, so the case for addressing it needs to be made in terms that business leaders can understand.
Unnecessarily costly workloads eat into the company’s margins. Getting visibility into where those workloads are and how to fix them allows engineers to quantify how much the company is losing for every day that it’s not fixed.
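Putting a daily dollar figure on that waste is what lets it be prioritized like any other tech debt. A hypothetical sketch, with illustrative core counts and an assumed per-core-hour rate:

```python
# Hypothetical sketch: translate idle capacity into dollars lost per day,
# so the fix can be weighed against feature work. Rates are illustrative.

def daily_loss(idle_cores: float, price_per_core_hour: float) -> float:
    """Dollars lost per day to capacity that is provisioned but unused."""
    return idle_cores * price_per_core_hour * 24

# e.g. 40 idle cores at an assumed $0.04 per core-hour
print(f"${daily_loss(40, 0.04):.2f} lost per day")
```

A number like this reframes the conversation: the question stops being "should we spend engineering time on this?" and becomes "how many more days do we want to keep paying for it?"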
CloudZero’s Kubernetes cost monitoring gives organizations the information they need to optimize Kubernetes costs and the tools to systematically address those problems. This allows teams to combat inefficient allocation of financial resources in the same way they would address other types of technical debt. In the end, teams get all the time-to-market advantages that Kubernetes promises while also ensuring costs stay low, improving the company’s bottom line.