When you’re operating in the cloud, making the right decisions is not always easy because there’s a lot of ground to cover, especially with regard to cost. The elastic nature of cloud infrastructure means your costs could quickly spiral out of control if you don’t have guardrails in place to help keep costs down.
Properly managing your cloud costs is important, especially for SaaS companies. Cloud spend feeds into your COGS, which in turn affects your gross margin and valuation. If you’re struggling to keep costs under control, you could be making one or more of the critical mistakes below.
Cloud Cost Mistakes You May Be Making In Your Business
1. Not maximizing your savings plans and reservations
This is particularly important for EC2 compute, but it also holds true for every service that supports reservations. Some businesses assume they’re too small to worry about savings plans and reservations, but by skipping these discounts they’re simply leaving money on the table.
If you don’t have the time to optimize those resources or are worried that your compute needs might shift over time, consider using a platform like ProsperOps to automate the process. In most cases, the potential savings will considerably outweigh the vendor’s fees.
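To see why these discounts matter even at small scale, here’s a rough back-of-the-envelope sketch. The hourly rates below are hypothetical examples, not current AWS prices:

```python
# Sketch: break-even math for a 1-year reservation vs. On-Demand.
# The prices below are hypothetical examples, not current AWS rates.

HOURS_PER_YEAR = 8760

def annual_cost_on_demand(hourly_rate: float) -> float:
    """Annual cost of running one instance 24/7 at On-Demand rates."""
    return hourly_rate * HOURS_PER_YEAR

def reservation_savings(on_demand_hourly: float, reserved_hourly: float) -> float:
    """Annual savings from committing to a reservation for one always-on instance."""
    return (on_demand_hourly - reserved_hourly) * HOURS_PER_YEAR

# Example: a $0.10/hr instance reserved at an effective $0.06/hr
savings = reservation_savings(0.10, 0.06)
print(f"Annual savings per instance: ${savings:,.2f}")  # $350.40
```

Multiply that by a fleet of always-on instances and the “too small to bother” argument falls apart quickly.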
2. Not exploring Spot Instances
Spot Instances offer discounts of up to 90% over On-Demand pricing, with savings of 60% or more being common in practice. If you already use elastic compute managed by Auto Scaling groups, you should consider Spot Instances for workloads that can tolerate interruption.
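If you manage your Auto Scaling groups programmatically, a blended On-Demand/Spot setup is one way to dip a toe in. The sketch below shows the shape of a `MixedInstancesPolicy` as you would pass it to boto3’s `create_auto_scaling_group`; the template name, counts, and instance types are illustrative assumptions, not recommendations:

```python
# Sketch: an Auto Scaling group MixedInstancesPolicy that blends
# On-Demand and Spot capacity. Names and values are illustrative.

mixed_instances_policy = {
    "InstancesDistribution": {
        # Keep a small On-Demand base for critical capacity...
        "OnDemandBaseCapacity": 2,
        # ...then fill everything above that base with 100% Spot.
        "OnDemandPercentageAboveBaseCapacity": 0,
        # Diversify across Spot pools to reduce interruption risk.
        "SpotAllocationStrategy": "capacity-optimized",
    },
    "LaunchTemplate": {
        "LaunchTemplateSpecification": {
            "LaunchTemplateName": "my-app-template",  # hypothetical name
            "Version": "$Latest",
        },
        # Offering several instance types gives Spot more pools to draw from.
        "Overrides": [
            {"InstanceType": "m5.large"},
            {"InstanceType": "m5a.large"},
            {"InstanceType": "m6i.large"},
        ],
    },
}
```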
3. Making cloud savings the finance team’s responsibility
Understandably, many organizations hear the terms “savings” and “cost optimization,” and assume they’re the responsibility of the finance team.
But this isn’t the whole story.
In fact, it’s missing the most important character.
While finance certainly has an important role to play in the cloud cost-cutting process, the most impactful and durable savings actually come from engineers.
That’s because engineers have the most contextual knowledge about how your cloud infrastructure is set up.
Using a cloud cost intelligence platform like CloudZero, your engineering team can see the cost consequences of their building decisions and discover which business units are running most (and least) efficiently.
With this context, engineers can go optimization hunting: identifying the resources that aren’t delivering value for money, and handling them in a way that reduces cost and preserves efficiency.
4. Not converting to GP3, especially for elastic compute
Amazon regularly improves its offerings and introduces new generations of services, such as the GP3 EBS volume type. Yet many companies still attach older-generation EC2 storage, also known as EBS volumes, such as the GP2.
When businesses define this infrastructure as code, it automatically spins up EC2 instances with an attached GP2 volume as part of the elastic infrastructure.
A big mistake is leaving this elastic infrastructure as is, and not migrating to the newer GP3. You’ll need to do a bit of testing before migrating, and, while you may not save huge amounts by doing so, you’d be losing money if you don’t.
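To gauge whether the migration is worth scheduling, a quick estimate helps. The sketch below assumes illustrative us-east-1-style rates of $0.10/GB-month for GP2 and $0.08/GB-month for GP3 (check current pricing) and made-up volume data; in practice you would list volumes with boto3’s `describe_volumes` and migrate in place with `modify_volume`:

```python
# Sketch: estimate monthly savings from migrating gp2 volumes to gp3.
# Prices are illustrative; check current AWS pricing for your region.

GP2_PRICE_PER_GB = 0.10
GP3_PRICE_PER_GB = 0.08

def gp3_monthly_savings(volumes: list[dict]) -> float:
    """Estimated monthly savings if every gp2 volume became gp3."""
    gp2_gb = sum(v["SizeGiB"] for v in volumes if v["VolumeType"] == "gp2")
    return gp2_gb * (GP2_PRICE_PER_GB - GP3_PRICE_PER_GB)

# In practice: fetch real volumes with describe_volumes, then migrate
# in place with modify_volume(VolumeId=..., VolumeType="gp3").
volumes = [
    {"VolumeId": "vol-aaa", "VolumeType": "gp2", "SizeGiB": 500},
    {"VolumeId": "vol-bbb", "VolumeType": "gp3", "SizeGiB": 200},
]
print(f"Estimated savings: ${gp3_monthly_savings(volumes):.2f}/month")  # $10.00
```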
5. Not exploring intelligent tiering for S3 buckets
If you use a lot of S3 storage, explore Intelligent-Tiering, which is Amazon’s way of helping you save money on buckets. S3 charges depend on the storage class, how much data you store, and how long you store it during the month. Intelligent-Tiering monitors access patterns and distinguishes between frequently and infrequently accessed objects.
When you leverage intelligent tiering, your information will automatically migrate across tiers depending on your usage, which helps you save.
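The optional archive tiers, however, must be opted into explicitly. Below is a sketch of an Intelligent-Tiering configuration dict as you would pass it to boto3’s `put_bucket_intelligent_tiering_configuration`; the bucket and configuration names are hypothetical, and the frequent/infrequent transitions happen automatically without any configuration:

```python
# Sketch: an S3 Intelligent-Tiering configuration that opts objects
# into the optional archive tiers. Names are hypothetical examples.

intelligent_tiering_config = {
    "Id": "archive-old-objects",  # hypothetical configuration name
    "Status": "Enabled",
    "Tierings": [
        # Objects not accessed for 90 days move to Archive Access...
        {"Days": 90, "AccessTier": "ARCHIVE_ACCESS"},
        # ...and after 180 days to Deep Archive Access.
        {"Days": 180, "AccessTier": "DEEP_ARCHIVE_ACCESS"},
    ],
}

# Applied with boto3:
# s3.put_bucket_intelligent_tiering_configuration(
#     Bucket="my-bucket", Id="archive-old-objects",
#     IntelligentTieringConfiguration=intelligent_tiering_config)
```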
6. Not setting lifecycle rules for storage
It’s best practice not to deploy storage-based volumes or S3 buckets without at least basic lifecycle rules. If you don’t set lifecycle rules, your storage grows unbounded, and no data is ever removed from the bucket.
Paying for that storage can get significantly expensive over time. So, it is important to define and implement lifecycle rules for different types of storage based on your usage patterns.
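As a concrete illustration, here’s the shape of a lifecycle configuration as you would pass it to boto3’s `put_bucket_lifecycle_configuration`. The prefix, transition days, and expiration window are hypothetical; tune them to your own usage patterns:

```python
# Sketch: an S3 lifecycle configuration that tiers data down and then
# deletes it. All names and day counts are illustrative assumptions.

lifecycle_configuration = {
    "Rules": [
        {
            "ID": "logs-retention",  # hypothetical rule name
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            # Move objects to cheaper storage classes as they age...
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            # ...and delete them entirely after a year.
            "Expiration": {"Days": 365},
        }
    ]
}

# Applied with boto3:
# s3.put_bucket_lifecycle_configuration(
#     Bucket="my-bucket", LifecycleConfiguration=lifecycle_configuration)
```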
7. Not lowering snapshot retention
RDS automated snapshots can be retained for up to 35 days, which is generally longer than most teams need to keep snapshots.
If you leave a long retention period in place, your RDS snapshots accumulate and the backup storage can become expensive. Lowering retention to seven or 14 days will in turn lower the cost of your RDS storage.
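A rough way to size the impact: assume each daily snapshot adds some incremental backup storage, and see what a shorter window frees up. The change rate and price below are illustrative assumptions, not AWS rates:

```python
# Sketch: rough estimate of how much lowering snapshot retention reduces
# billed backup storage. Assumes each daily snapshot adds roughly
# `daily_change_gb` of incremental storage; the price is illustrative.

BACKUP_PRICE_PER_GB_MONTH = 0.095  # hypothetical rate; check your region

def retention_savings_gb(daily_change_gb: float, old_days: int, new_days: int) -> float:
    """GB of backup storage freed by shortening the retention window."""
    return daily_change_gb * (old_days - new_days)

freed = retention_savings_gb(daily_change_gb=20, old_days=35, new_days=7)
print(f"~{freed:.0f} GB freed, ~${freed * BACKUP_PRICE_PER_GB_MONTH:.2f}/month")

# In practice, the retention period is changed with boto3:
# rds.modify_db_instance(DBInstanceIdentifier="mydb",
#                        BackupRetentionPeriod=7)
```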
8. Failing to appreciate the value of unit cost metrics
Most cloud cost tools were designed to give you visibility into how much you’re spending on a particular resource class.
For example, you might be able to see that 80% of your monthly cloud spend goes to EC2.
To the untrained eye, this might seem alarming.
EC2 is clearly the biggest slice of your cloud spend, and when it comes time for cost-cutting exercises, you might naturally target it.
But without the context afforded by unit cost metrics, the 80% figure really gives you no information.
It doesn’t answer questions like:
- Which of our features or products uses EC2 most heavily?
- Is one of our customers responsible for 45% of that cost?
- What’s the best way to reduce that cost without sacrificing the quality or performance of our products?
Solid, reliable cloud unit metrics are the only way to answer these questions.
CloudZero uses CostFormation, a homegrown domain-specific language, to allocate 100% of your cloud spend to the units that matter most to your business.
That could be cost per customer, per engineering team, per transaction — or whatever is most important to you.
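In miniature, a unit cost metric is just spend grouped by a business dimension. The toy sketch below computes cost per customer from hand-tagged records; the records and tags are made up, and real allocation (which platforms like CloudZero automate) also has to handle untagged and shared spend:

```python
# Sketch: a toy unit cost metric, computed from tagged spend records.
# All records here are fabricated illustrations.

from collections import defaultdict

def cost_per_customer(records: list[dict]) -> dict[str, float]:
    """Sum spend by each record's customer tag."""
    totals: dict[str, float] = defaultdict(float)
    for r in records:
        totals[r["customer"]] += r["cost"]
    return dict(totals)

records = [
    {"service": "EC2", "customer": "acme", "cost": 1200.0},
    {"service": "EC2", "customer": "globex", "cost": 300.0},
    {"service": "S3", "customer": "acme", "cost": 150.0},
]
print(cost_per_customer(records))  # {'acme': 1350.0, 'globex': 300.0}
```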
9. Provisioning optimized volumes when you don’t need them
Companies are often quick to provision performance-optimized volumes, such as provisioned-IOPS io1 and io2, because they want to guarantee performance, but they frequently end up not using that extra headroom. Unfortunately, optimized volumes are usually more expensive than non-optimized ones.
Before provisioning an optimized volume, it’s important to test your use case and see whether a newer volume like the GP3 would suffice. GP3 volumes offer far more configuration options than GP2, such as independently provisioned IOPS and throughput.
Managing and monitoring expensive storage should be a routine hygiene activity for every organization.
Establish a culture of going with GP3s, unless you’ve done some testing and confirmed you need the extra throughput or performance. If your engineers are requesting or using expensive volumes, it is important to ask if you really need them, and test if a GP3 would be more appropriate.
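One way to operationalize that culture is to periodically flag provisioned-IOPS volumes whose configured performance already fits within GP3’s limits (up to 16,000 IOPS and 1,000 MiB/s). The volume data below is made up; in practice you would pull real values via `describe_volumes`:

```python
# Sketch: flag expensive provisioned-IOPS volumes that would fit within
# gp3's configurable limits. Volume data is illustrative.

GP3_MAX_IOPS = 16_000
GP3_MAX_THROUGHPUT_MIBS = 1_000

def gp3_candidates(volumes: list[dict]) -> list[str]:
    """IDs of io1/io2 volumes whose performance needs gp3 can meet."""
    return [
        v["VolumeId"]
        for v in volumes
        if v["VolumeType"] in ("io1", "io2")
        and v["Iops"] <= GP3_MAX_IOPS
        and v.get("ThroughputMiBs", 0) <= GP3_MAX_THROUGHPUT_MIBS
    ]

volumes = [
    {"VolumeId": "vol-io-small", "VolumeType": "io1", "Iops": 4000},
    {"VolumeId": "vol-io-huge", "VolumeType": "io2", "Iops": 40000},
]
print(gp3_candidates(volumes))  # ['vol-io-small']
```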
10. Paying for extra cloud services you don’t use
Organizations sometimes turn on cloud management services like CloudTrail and GuardDuty at the request of their information security team. But these services can get expensive.
It’s one thing if you find them helpful. However, if the only reason you’re paying for them is because your InfoSec team wants them on, a good way to bring context to the expense is to annualize that cost. For example, a $20,000 monthly cost is $240,000 per year. Present that information to your InfoSec team so they can justify the expense.
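The arithmetic is trivial, but putting the annual figure in front of stakeholders is the point:

```python
# Sketch: annualizing a monthly line item to frame the conversation.

def annualize(monthly_cost: float) -> float:
    """Convert a monthly recurring cost to an annual figure."""
    return monthly_cost * 12

print(f"${annualize(20_000):,.0f}/year")  # $240,000/year
```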
Gain Full Visibility Into Your Cloud Costs With CloudZero
Accurately tracking and managing cloud costs can feel like an insurmountable challenge. But it doesn’t have to be that way.
CloudZero is a cost intelligence platform that gives you complete visibility into your cloud costs and allows you to organize your cloud spend into relevant business dimensions — whether your tags are perfect or not.
With CloudZero, you can clearly visualize your cloud spend, even for containerized and multi-tenant infrastructure like Kubernetes. CloudZero brings the important metrics to your attention so you can take relevant action. Explore CloudZero to learn more about how it delivers valuable insights into your cloud spend.