Table Of Contents
Amazon EKS Best Practices For Optimizing Costs
  1. Make Sense Of Your AWS Kubernetes Bill First
  2. Activate Cluster Autoscaling
  3. Get Creative With Your Instances
  4. Rightsize Your EKS Setup
  5. Use Spot Instances Over On-Demand Instances
  6. Use AWS Fargate With EKS
  7. Optimize Your Setup’s Resource Requests
What Next: Optimize Your Amazon EKS Costs The Easy Way

Amazon Elastic Kubernetes Service (EKS) eases deploying and running Kubernetes on the AWS platform. A fully managed Kubernetes service, EKS eliminates the need to install, configure, or maintain Kubernetes nodes or control planes on your own.

With EKS, you can leverage the performance, scalability, and availability of AWS infrastructure, along with integrations with multiple AWS compute, storage, security, serverless, and networking services.

The problem is that it is quite easy to overspend on Amazon EKS if you don’t know how to optimize your Kubernetes costs. So, what can you do to reduce your EKS costs when running containers or microservices on AWS?

Amazon EKS Best Practices For Optimizing Costs

Cost optimization in AWS is a continuous process of refining and improving the cost of running a workload over its lifespan.

AWS reduced the cost of running Kubernetes clusters on EKS from $0.20/hour to $0.10/hour in January 2020. That’s a good thing. Yet you do not want to rely on factors outside your control, like AWS reducing Amazon EKS pricing, to improve your margins.
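At that price, the control plane alone works out to roughly $73 per cluster per month ($0.10 × about 730 hours), before you pay anything for worker nodes, storage, or data transfer.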

Instead, the best practices in this post will help you design and run cost-aware processes that drive business results while minimizing costs so you can maximize your return on investment.

1. Make Sense Of Your AWS Kubernetes Bill First

Understanding your Kubernetes costs is often an enormous challenge when you are trying to optimize them. Several AWS-provided and third-party Kubernetes cost monitoring tools can help here, including AWS Cost Explorer and CloudZero.

While Cost Explorer helps you get an overview of your K8s costs, platforms like CloudZero offer more granular Kubernetes cost insight, such as cost per cluster, namespace, pod, customer, team, environment, and more.
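If you want to pull those numbers programmatically, here is a minimal sketch using the Cost Explorer API via boto3. It assumes your EKS-related resources carry a cost allocation tag (shown here as a hypothetical `eks:cluster-name` tag) that you have activated in the Billing console:

```python
import boto3

ce = boto3.client("ce")  # AWS Cost Explorer

# Last month's unblended cost, grouped by a (hypothetical) cluster tag.
# Swap in whichever cost allocation tag you actually apply to EKS resources.
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "eks:cluster-name"}],
)

for group in response["ResultsByTime"][0]["Groups"]:
    tag_value = group["Keys"][0]  # e.g. "eks:cluster-name$prod-cluster"
    cost = group["Metrics"]["UnblendedCost"]["Amount"]
    print(f"{tag_value}: ${float(cost):.2f}")
```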

You can more easily decide where to cut expenses once you have a detailed understanding of your Kubernetes costs.

2. Activate Cluster Autoscaling

AWS autoscaling allows you to add or remove nodes or pods as your workload changes. This feature enables you to adjust your cluster size so that your workload can run as efficiently as possible.

Autoscaling can be horizontal or vertical. Horizontal autoscaling adds or removes nodes or pods by increasing or decreasing their replica count.

Conversely, vertical scaling refers to modifying allocated resources like the CPU and memory of each node in a cluster. It usually involves setting up an entirely new node pool with machines that have varying hardware specs.

With vertical scaling on pods, resource requests and limits dynamically adjust in response to your current application requirements.

By using a tool like Cluster Autoscaler, you can automatically adjust the size of your K8s cluster when some pods fail to schedule or some nodes have been idle for an extended period — including moving their pods to the active nodes. In addition to reducing your Kubernetes spend, removing idle nodes improves performance.
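To make the horizontal side concrete, here is a minimal sketch that creates a Horizontal Pod Autoscaler for a hypothetical `api` deployment using the official Kubernetes Python client. It assumes the metrics server is running in your cluster; the deployment name and namespace are placeholders:

```python
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running inside the cluster

# Scale the hypothetical "api" deployment between 2 and 10 replicas,
# targeting roughly 70% average CPU utilization across its pods.
hpa = client.V1HorizontalPodAutoscaler(
    api_version="autoscaling/v1",
    kind="HorizontalPodAutoscaler",
    metadata=client.V1ObjectMeta(name="api-hpa", namespace="default"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="api"
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```

Cluster Autoscaler then handles the node side: when the extra replicas no longer fit on existing nodes it adds capacity, and it removes nodes that sit idle.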

3. Get Creative With Your Instances

The goal here is to get the best price-performance ratio. You’ll want to compare instance types carefully, because the cheapest option may not meet the high-throughput, low-latency demands of your workload.

If your workload is light, you can often pick the cheapest instance for it. Alternatively, you could choose fewer machines with higher specs. Since every additional node adds a little overhead, running fewer, larger nodes can help lower your Kubernetes bill.

But there’s something else. In a mixed-instance setup, each instance type brings its own amount of CPU and memory. So even though you can use metrics like CPU and network utilization to scale your Auto Scaling groups, those metrics may be inconsistent across instance types.

Again, Cluster Autoscaler can help here by enabling you to blend instance types in one node group. You’ll just need to ensure your instances have the same CPU and RAM capacity.
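A quick way to sanity-check that the types you plan to mix really do match, sketched with boto3 (the instance types below are just examples):

```python
import boto3

ec2 = boto3.client("ec2")

# Confirm that candidate instance types offer the same vCPU / memory,
# a prerequisite for mixing them in one Cluster Autoscaler node group.
resp = ec2.describe_instance_types(
    InstanceTypes=["m5.large", "m5a.large", "m5d.large"]
)
for it in resp["InstanceTypes"]:
    vcpus = it["VCpuInfo"]["DefaultVCpus"]
    mem_gib = it["MemoryInfo"]["SizeInMiB"] // 1024
    print(f'{it["InstanceType"]}: {vcpus} vCPU, {mem_gib} GiB')
```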

4. Rightsize Your EKS Setup

Rightsizing in AWS means matching computing resources to the requirements of your application in a way that eliminates both waste (overprovisioning) and performance degradation (underprovisioning).

You can use AWS Cost Explorer Resource Optimization to identify EC2 instances that are underutilized or running idle. Then you can decide to terminate or downsize these instances to save costs.

If you are not sure where to begin, a tool like CloudZero Advisor can help you pick the best instance types and sizes, and advise you on pricing, for your specific workload, AWS service, and more.

Also:

  • You can automatically stop and start instances on a schedule using AWS Instance Scheduler.
  • AWS Operations Conductor can also automatically resize EC2 instances. Both Instance Scheduler and Operations Conductor use Cost Explorer recommendations.
  • AWS Compute Optimizer goes a step further, recommending instance types both within and across instance families rather than only downsizing within the same family; a minimal sketch of querying its recommendations follows this list.
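Here is a minimal sketch of pulling those Compute Optimizer recommendations with boto3, assuming you have already opted in to the service for your account:

```python
import boto3

co = boto3.client("compute-optimizer")

# Print each instance's finding (e.g. over-provisioned) and the
# top-ranked instance type Compute Optimizer recommends instead.
resp = co.get_ec2_instance_recommendations()
for rec in resp["instanceRecommendations"]:
    best = min(rec["recommendationOptions"], key=lambda o: o["rank"])
    print(rec["currentInstanceType"], rec["finding"], "->", best["instanceType"])
```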

5. Use Spot Instances Over On-Demand Instances

Spot Instances are an AWS resource purchase option that lets you use surplus compute capacity to run your workload at a discount of up to 90% off the on-demand price.

It is crucial to note that AWS may need this capacity back at any time. It is recommended that you only use this option for workloads that aren’t too sensitive to disruptions since AWS gives only a 2-minute notice before taking back the Spot capacity.

A top Spot Instance best practice here is to diversify across multiple Spot capacity pools, where each pool is a combination of instance type and Availability Zone. This lets you provision cheap compute capacity from many pools of Spot Instances at once.

The reasoning is that even if AWS reclaims the surplus capacity, it typically does so in a few pools or Availability Zones at a time, not across all of them at once. Thus, you won’t need to switch to costly On-Demand Instances or experience downtime.
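On EKS, one low-maintenance way to apply this is a managed node group that runs on Spot capacity across several matching instance types, so the group draws from multiple capacity pools. A minimal boto3 sketch, with the cluster name, subnets, and role ARN as placeholders:

```python
import boto3

eks = boto3.client("eks")

# Spot-backed node group spread across several instance types and the
# subnets' Availability Zones, i.e. several Spot capacity pools.
eks.create_nodegroup(
    clusterName="my-cluster",
    nodegroupName="spot-workers",
    capacityType="SPOT",
    subnets=["subnet-aaaa1111", "subnet-bbbb2222", "subnet-cccc3333"],
    nodeRole="arn:aws:iam::123456789012:role/eksNodeRole",
    instanceTypes=["m5.large", "m5a.large", "m5d.large", "m4.large"],
    scalingConfig={"minSize": 2, "maxSize": 20, "desiredSize": 4},
)
```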

You can also configure Spot Instance groups (AWS Spot Fleets). A Spot Fleet requests capacity across multiple instance types and Availability Zones at the same time, which further improves your price-performance ratio. Rather than bidding on a specific Spot pool, you set the maximum hourly price you are willing to pay for the entire fleet.

To get the benefits of this EKS cost optimization best practice, you will have to do a little extra setup, configuration, and maintenance work.

6. Use AWS Fargate With EKS

Here’s the deal: with AWS Fargate, a serverless compute engine, you can run Kubernetes pods without provisioning or managing clusters of K8s servers yourself.
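To tell EKS which pods should run on Fargate, you create a Fargate profile whose selectors match those pods. A minimal boto3 sketch, with the cluster name, role ARN, subnets, and namespace as placeholders:

```python
import boto3

eks = boto3.client("eks")

# Run every pod in the (hypothetical) "batch-jobs" namespace on Fargate,
# so no EC2 worker nodes are needed for that workload.
eks.create_fargate_profile(
    clusterName="my-cluster",
    fargateProfileName="batch-jobs-on-fargate",
    podExecutionRoleArn="arn:aws:iam::123456789012:role/eksFargatePodExecutionRole",
    subnets=["subnet-aaaa1111", "subnet-bbbb2222"],  # private subnets only
    selectors=[{"namespace": "batch-jobs"}],
)
```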

AWS Fargate pricing is based on usage (pay-per-use), and there are no upfront charges. There is, however, a one-minute minimum charge, and all charges are rounded up to the nearest second. You will also be charged for any additional services you use, such as CloudWatch charges and data transfer fees.

Unlike AWS Lambda, Fargate gives you the option to use different runtimes with EKS or ECS, and it is also cheaper than Lambda per hour of execution. For more information, see our AWS Fargate vs AWS Lambda comparison.

Furthermore, Fargate can reduce your management costs by reducing the number of DevOps professionals and tools you need to run Kubernetes on Amazon EKS.

7. Optimize Your Setup’s Resource Requests

Kubernetes uses resource requests to decide how much CPU and memory to set aside for each container. These requests reserve compute capacity on worker nodes.

However, there is often an excess reserve, also known as slack, between requested and used resources.

As slack grows, you reserve more resources than you actually use, resulting in higher costs. You can use a tool like Kubernetes Resource Report to view this excess and identify specific workloads where you can lower resource requests to save more.
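For example, here is a minimal sketch that tightens the requests on a hypothetical `api` deployment with the Kubernetes Python client, once you have confirmed its real usage sits well below what it requests (the deployment name, namespace, and values are placeholders):

```python
from kubernetes import client, config

config.load_kube_config()

# Lower the reserved CPU/memory for the "api" container to shrink slack.
patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [
                    {
                        "name": "api",
                        "resources": {
                            "requests": {"cpu": "250m", "memory": "256Mi"},
                            "limits": {"cpu": "500m", "memory": "512Mi"},
                        },
                    }
                ]
            }
        }
    }
}

client.AppsV1Api().patch_namespaced_deployment(
    name="api", namespace="default", body=patch
)
```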

What Next: Optimize Your Amazon EKS Costs The Easy Way

It’s tricky to control your Kubernetes budget. But it is possible when you implement these EKS cost optimization best practices.

Yet, as we frequently advise our clients, you don’t want to reduce EKS costs indiscriminately. Achieving optimal ROI requires balancing cost reduction and performance optimization. This requires a clear, granular understanding of your Amazon EKS costs.

CloudZero lets you view your EKS costs based on the business dimensions that matter to you. CloudZero’s Cloud Cost Intelligence approach breaks down Kubernetes costs by the hour, cluster, namespace, and pod.

While other cost optimization tools only provide averages and totals, CloudZero provides more actionable cost intelligence, like cost per customer, product, software feature, environment, team, and more.

This granularity allows you to identify exactly what drives your K8s costs, so you can tell where to cut spending without sacrificing system performance.

Using CloudZero, you can combine your containerized or non-containerized workloads in one place and still view your K8s costs by microservice, environment, etc.

In addition, CloudZero’s cost anomaly detection tracks your Kubernetes cost trends in real time, alerting you whenever you approach your pre-set EKS budget limit so you can take action before you overspend.

Interested in finding out how much CloudZero can save you on Amazon EKS?
