Table Of Contents
  • Reason 1: DevOps Can Proactively Control Cloud Costs
  • Reason 2: The Cloud TMI Struggle Is Real
  • Reason 3: Avoiding Development Slowdowns
  • Getting Real On DevOps Cloud Cost Optimization

If you’re a cloud architect or engineering lead, chances are you’ve had a defensive conversation with finance about the AWS bill. Picture App Owner Amy being grilled by Finance Frank about why last month’s spend came in so much higher than the month before.

Unfortunately, this scenario is all too familiar, and understandable from Finance Frank’s point of view. He’s just trying to do his job, but he has zero visibility into which engineering activities are costing the organization so much (or why those costs vary from month to month). That makes the conversation between App Owner Amy and Frank adversarial when it doesn’t have to be. Frank should be thinking about profit margins and capital allocation, not learning about reserved instance (RI) distribution or the nuances of AWS Fargate pricing. The responsibility for cloud cost optimization should not fall on finance alone. Here’s why:

Reason 1: DevOps Can Proactively Control Cloud Costs

The public cloud model has made buyers out of engineers, whether or not they’ve realized it. Traditional tactics for IT budgeting and cost management are antithetical to building on AWS. Think about it: with AWS, the ability to scale is effectively unlimited, so the old server-bound constraints on engineers are gone. It’s like going to a restaurant and choosing a meal from a menu with no prices. Engineers will pick the filet mignon of AWS services every time (and are sometimes charged by the bite). Historically, they haven’t had the data to make a better, cost-informed choice.

Here’s an all-too-common scenario: finance consults an outdated cloud cost management solution or monthly invoice and asks DevOps leadership to log in to AWS to investigate the root cause of a cost spike. Often DevOps leaders don’t have direct answers to finance’s questions, so they have to ask the engineering product owner what’s happening within their application. Cloud cost anomalies can result from any number of engineering actions or simple mistakes, and tracing them today is inefficient at best and extraordinarily wasteful at worst.
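For teams stuck playing detective today, the hunt usually starts in the billing data itself. The sketch below is a minimal illustration (not from this article) of how that first question might be answered programmatically with boto3’s Cost Explorer get_cost_and_usage call; the date range and the choice to group by service are assumptions for the example.

```python
# Minimal sketch: group one day's AWS spend by service to see what drove a spike.
# Assumes AWS credentials with Cost Explorer (ce:GetCostAndUsage) access are configured;
# dates are illustrative.
import boto3

ce = boto3.client("ce")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-06-01", "End": "2024-06-02"},  # illustrative dates
    Granularity="DAILY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

# Sort services by cost, highest first, and print the top five offenders.
groups = response["ResultsByTime"][0]["Groups"]
groups.sort(key=lambda g: float(g["Metrics"]["UnblendedCost"]["Amount"]), reverse=True)
for group in groups[:5]:
    service = group["Keys"][0]
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    print(f"{service}: ${amount:,.2f}")
```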

Rather than this reactive approach, the script should be flipped entirely. DevOps should have complete, real-time visibility into the cost of engineering and infrastructure decisions. Cost should be a first-class operational metric and a major part of the day-to-day engineering workflow. Instead of playing detective, DevOps leadership can proactively report costs to finance, justify the cost of actions, and project the long-term cost of applications and projects in the pipeline. That way, instead of defensively approaching DevOps teams about spend, finance and executives can focus on setting the right incentives to innovate while reducing waste.


Reason 2: The Cloud TMI Struggle Is Real

Too much information (TMI) is a very real cloud cost optimization problem. Cloud bills and billing tools are loaded with data that isn’t necessarily relevant to the person trying to interpret what actually happened in any given month. In some cases, a cloud manager tries to control costs by adopting AWS budgeting features, but in reality is making educated guesses about unpredictable expenses like bandwidth, support, RIs, and more. Right now, there’s too much raw information for anyone to make an informed decision, and not enough targeted information for each level of decision-maker.

Instead, each decision-maker should see only the cost data that’s relevant to their job. For example:

  • Finance: Should have global awareness of the real cost of their systems. This data provides the business context to understand which applications and features are the most and least profitable, innovative, and efficient.
  • Operations: Should have real-time data on the cost of cloud operations, as well as anomaly detection capabilities to investigate the cost of individual infrastructure resources as incidents happen.
  • Developers: Should have data that correlates cost with the performance of their systems, applications, and features. This data allows them to build better, cost-efficient systems and fix any issues with their code in real time.

Reason 3: Avoiding Development Slowdowns

Few people realize that access to real-time cost data can actually increase the pace of development. There’s a common misconception among developers that building with cost in mind will slow them down. In reality, retroactively chasing down issues after finance raises the alarm on cloud costs is one of the biggest (yet rarely discussed) operational inefficiencies today.

Development organizations that embrace cost as an operational metric can innovate faster and build more efficient systems. By piping cost data into tools DevOps teams already use, like Slack and OpsGenie, developers can observe their applications and fix potentially costly mistakes in real time. Rather than building against the nearly unlimited scale of the cloud, developers can learn to use just the capacity they need (and curb unnecessary waste). In time, efficient systems create found money in the engineering budget that can be reinvested elsewhere (like hiring engineers for serverless transformations).
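To make that concrete, here is a rough sketch (not CloudZero’s implementation) of what piping cost data into Slack can look like: it compares yesterday’s AWS spend to a trailing seven-day average via Cost Explorer and posts to a Slack incoming webhook when spend spikes. The webhook URL and the 1.3x threshold are placeholders.

```python
# Rough sketch: flag a daily cost spike and post it to a Slack channel.
# Assumptions: boto3 credentials with Cost Explorer access, a Slack incoming
# webhook URL (placeholder below), and an arbitrary 1.3x spike threshold.
from datetime import date, timedelta

import boto3
import requests

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder
SPIKE_THRESHOLD = 1.3  # alert if yesterday's cost exceeds 1.3x the trailing average

ce = boto3.client("ce")
end = date.today()
start = end - timedelta(days=8)  # seven baseline days plus yesterday (End is exclusive)

response = ce.get_cost_and_usage(
    TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
    Granularity="DAILY",
    Metrics=["UnblendedCost"],
)

daily = [float(r["Total"]["UnblendedCost"]["Amount"]) for r in response["ResultsByTime"]]
baseline = sum(daily[:-1]) / len(daily[:-1])
yesterday = daily[-1]

if yesterday > SPIKE_THRESHOLD * baseline:
    requests.post(
        SLACK_WEBHOOK_URL,
        json={"text": f"Cloud cost spike: ${yesterday:,.2f} yesterday vs ${baseline:,.2f} daily average."},
    )
```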

Getting Real On DevOps Cloud Cost Optimization

Giving the right data to the right people at the right time can help DevOps teams avoid defensive conversations with finance, and give finance the information it needs to do its job. Instead of the buck stopping with finance (pun intended), the responsibility for cloud cost optimization should also fall into the hands of cloud architects and DevOps leaders. All they need are the right tools. To learn more about CloudZero’s cloud cost optimization capabilities, get started here.
