Table Of Contents
  • How DLT Compute Tiers Work
  • The CloudZero Approach: Gauging Usage To Need
  • Real Results: $40,000 Saved
  • Key Takeaway: Match Compute Tier To Actual Need
  • Final Thoughts

Databricks is a critical part of many organizations’ tech stacks, facilitating analytics, machine learning, and other leading-edge data engineering tasks. But when a service like Databricks becomes essential, it also tends to become a cost black hole, leading engineering teams to a quandary: How can you keep Databricks costs in check without hurting application performance?

At CloudZero, we give organizations unparalleled visibility into their Databricks costs. Recently, we uncovered a key opportunity to optimize Databricks spending by analyzing how teams use Delta Live Tables (DLT) compute tiers.

How DLT Compute Tiers Work

Databricks offers three DLT compute tiers, each designed to support different levels of workload complexity:

  • DLT Core: $0.20/DBU (base tier)
  • DLT Pro: $0.25/DBU (25% more than Core)
  • DLT Advanced: $0.36/DBU (80% more than Core)

In principle, your workload complexity should determine which tier you choose. Core would suffice for a simple workload; a more sophisticated workload would require Pro or Advanced.
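
To see how those per-DBU differences compound at scale, here's a back-of-the-envelope sketch in Python. The annual DBU volume is a hypothetical figure chosen for illustration, not a customer number:

    # Annual cost of the same workload at each DLT tier.
    # 500,000 DBUs/year is a hypothetical volume, not customer data.
    RATES = {"Core": 0.20, "Pro": 0.25, "Advanced": 0.36}  # $/DBU
    ANNUAL_DBUS = 500_000

    for tier, rate in RATES.items():
        print(f"{tier}: ${rate * ANNUAL_DBUS:,.0f}/year")

    # Prints:
    #   Core: $100,000/year
    #   Pro: $125,000/year
    #   Advanced: $180,000/year

At that volume, defaulting to Advanced instead of Core costs an extra $80,000 per year for the identical workload.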

But organizations don’t always think this way. Faced with tiered options like these, teams often equate “expensive” with “premium,” assuming the most expensive option must also be the highest-performing one. It isn’t. If your compute workload isn’t complex enough to warrant it, buying Advanced DLT compute instances is like renting a warehouse to store the contents of a studio apartment.

The CloudZero Approach: Gauging Usage To Need

Price isn’t the only difference between DLT compute tiers. Each one suits a particular level of workload complexity:

  • DLT Core Compute is ideal for straightforward pipelines with simple transformations and low data volume
  • DLT Pro Compute adds more robust capabilities and can support moderately complex jobs
  • DLT Advanced Compute is built for highly complex, large-scale pipelines that require features like Change Data Capture (CDC), enhanced orchestration, and broader SLA capabilities

By correlating Databricks spend with job-level metadata (e.g., task duration, pipeline structure, transformation complexity), CloudZero can assess which DLT compute tier is right for a given customer. Often, workloads running on Advanced compute can be refactored to run on Core or Pro, immediately reducing DBU costs without sacrificing reliability or performance.
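
Here's a minimal sketch of what that kind of heuristic can look like. The metadata fields and thresholds are illustrative assumptions, not CloudZero's actual scoring model:

    # Hypothetical tier-recommendation heuristic based on job-level metadata.
    # Fields and thresholds are illustrative, not CloudZero's actual model.
    from dataclasses import dataclass

    @dataclass
    class PipelineMetadata:
        uses_cdc: bool            # Change Data Capture is an Advanced feature
        needs_strict_sla: bool    # enhanced monitoring/orchestration
        transformation_stages: int
        avg_task_minutes: float

    def recommend_tier(p: PipelineMetadata) -> str:
        if p.uses_cdc or p.needs_strict_sla:
            return "Advanced"
        if p.transformation_stages > 3 or p.avg_task_minutes > 30:
            return "Pro"
        return "Core"

    # A simple, fast pipeline maps to Core, even if it currently
    # runs (and is billed) on Advanced.
    print(recommend_tier(PipelineMetadata(False, False, 2, 12.0)))  # Core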

Real Results: $40,000 Saved

This insight proved out in our work with a B2B data provider. The customer had defaulted to running multiple production pipelines on DLT Advanced Compute. CloudZero investigated whether their workloads’ complexity actually warranted Advanced Compute and determined that several of those pipelines could safely run on the Core or Pro tiers.

After reconfiguring those pipelines, the customer reaped an estimated $40,000 in annual savings — all without impacting pipeline performance or stability.
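
For a rough sense of scale (our illustration, not the customer’s actual usage pattern): moving a pipeline from Advanced ($0.36/DBU) to Core ($0.20/DBU) saves $0.16 per DBU, so $40,000 in annual savings corresponds to roughly 250,000 DBUs per year shifted to a cheaper tier.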

Key Takeaway: Match Compute Tier To Actual Need

Selecting the right DLT compute tier should be a performance-informed decision. It’s not about how much you can deploy; it’s about how much you need to deploy. Teams should regularly audit:

  1. Transformation complexity: Are you doing basic filtering and joins, or running multi-stage transformations and real-time enrichment?
  2. Pipeline criticality: Do these pipelines require strict SLAs, enhanced monitoring, or enterprise-grade orchestration?
  3. Execution patterns: Are jobs batch-based and predictable, or dynamic and streaming?

If your answers lean toward simplicity, DLT Core or Pro will likely meet your needs at a significantly lower cost over time. It’s important to remember that choosing a less costly tier doesn’t inherently mean sacrificing performance.
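
As a concrete starting point for that audit, here's a short Python sketch that ranks pipelines by how much a downgrade would save. The export file and its column names are hypothetical placeholders; adapt them to your own billing data:

    # Rank pipelines by potential savings from a downgrade to Core.
    # "dlt_usage_export.csv" and its columns (pipeline, tier, dbus) are
    # hypothetical placeholders for your own billing export.
    import pandas as pd

    RATES = {"Core": 0.20, "Pro": 0.25, "Advanced": 0.36}  # $/DBU

    usage = pd.read_csv("dlt_usage_export.csv")
    summary = usage.groupby(["pipeline", "tier"], as_index=False)["dbus"].sum()
    summary["current_cost"] = summary["tier"].map(RATES) * summary["dbus"]
    summary["cost_if_core"] = RATES["Core"] * summary["dbus"]
    summary["potential_savings"] = summary["current_cost"] - summary["cost_if_core"]

    # Pipelines on Advanced with high potential savings are the first
    # candidates to check against the three audit questions above.
    print(summary.sort_values("potential_savings", ascending=False).head())

Any pipeline that surfaces near the top and answers “simple” to the questions above is a candidate for reconfiguration.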

Final Thoughts

CloudZero enabled this insight by mapping usage patterns directly to cost structures. Every engineering team should be equipped to make these kinds of decisions, and granular visibility is the only way to make that happen. Cloud cost optimization starts with, and depends most heavily on, informed engineers.
