Table Of Contents
  • How We’re Looking At Data (And Why It Matters)
  • Main Highlights For January 2026
  • 1. Cost By Provider
  • 2. Cost By Service Category
  • 3. Cost Of AI/ML
  • Deep Dive: How AI Is Reshaping Cloud Economics
  • Actionable Guidance
  • Your Takeaway For This Month

Welcome to January’s Cloud Economics Pulse, CloudZero’s monthly look at cloud spend as AI moves from pilots to production. The related news flash: AI spend keeps hitting new highs.

In last month’s Pulse, we explored the compounding effect of AI becoming part of everyday cloud operations. This month, we see that pattern harden into year-end results.

December capped a year of recalibration: provider share stabilized, compute softened, and data and AI services claimed more of the mix.

None of this happened in dramatic spikes. Cloud economics is resetting monthly — not annually.

AI is no longer a toggle. It’s being designed into systems, and its costs are now an expected part of overall monthly spend rather than a short-term variable (even if AI spend is still “spiky”).

This month’s Pulse examines how those commitments are forming — and what it means as cloud spend increasingly rewards speed, clarity, and architectural intent.

How We’re Looking At Data (And Why It Matters)

For the Cloud Economics Pulse, we track monthly cloud spend trends using anonymized, aggregated data from CloudZero’s network.

  • Cost by Provider and Cost by Service Category are shown as stacked charts, each illustrating how providers and service types contribute to total cloud spend over time. These are presented as percentages totaling 100% for each month.
  • Cost of AI/ML measures the share of AI and machine learning technologies as a percentage of all cloud spend and is presented as a line chart to highlight trend acceleration. This is presented as both average and median % of total spend.

Together, these views show not just where cloud dollars go, but how spending patterns shift as new technologies — especially AI — reshape the cost landscape.
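To make the methodology concrete, here is a minimal sketch of both calculations on a toy dataset. The org names, spend figures, and field names are illustrative assumptions, not CloudZero’s actual schema:

```python
from statistics import mean, median

# Hypothetical monthly spend records: {org: {provider: dollars}}
january = {
    "org-a": {"AWS": 820_000, "Azure": 140_000, "GCP": 60_000},
    "org-b": {"AWS": 310_000, "Azure": 55_000, "GCP": 40_000},
}

def provider_shares(month):
    """Normalize provider spend to percentages totaling 100% for the month."""
    totals = {}
    for org_spend in month.values():
        for provider, dollars in org_spend.items():
            totals[provider] = totals.get(provider, 0) + dollars
    grand_total = sum(totals.values())
    return {p: round(100 * d / grand_total, 1) for p, d in totals.items()}

def ai_share_stats(ai_pct_by_org):
    """Average and median AI/ML share across orgs, as in the Cost of AI/ML chart."""
    values = list(ai_pct_by_org.values())
    return {"average": round(mean(values), 2), "median": round(median(values), 2)}

print(provider_shares(january))
print(ai_share_stats({"org-a": 4.1, "org-b": 0.3, "org-c": 0.2}))
```

The average/median split matters because one heavy GPU user can pull the average far above the median, which is exactly the pattern discussed in the AI/ML section.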


Main Highlights For January 2026

  • AI/ML reached a record share of spend (again), closing out 2025 as production infrastructure, not an experiment.
  • Compute softened as the stack rebalanced. Compute fell below 49% in December while data and AI services continued to grow, with budgets shifting from raw compute toward the layers that keep AI running in production.
  • Provider dynamics normalized. AWS ended the year firmly in the lead, Azure and GCP held steady, smaller categories stabilized. Provider reshuffling is giving way to consolidation.

1. Cost By Provider

Here, we’re looking at how overall cloud spend is distributed across providers: 

The provider mix is stabilizing, with AWS remaining the default platform for most organizations.

AWS closed out 2025 at 67.9% — down from its September peak above 71%, but still 0.7 points higher than January. It remains the primary landing zone for compute, data, and AI workloads.

Azure ended December at 11%, flat month-over-month and down from 12.2% in January. The broader 2025 trend: gradual erosion rather than sustained recovery.

GCP quietly strengthened to 7% in December — its highest share of the year, nearly a full point above January. Gains were incremental, reflecting steady analytics and data adoption rather than abrupt migrations.

Outside the Big Three: normalization. “Other” ended December at 3.4%, well below early-year levels near 6%. The October spike now reads as an outlier, not a directional shift.

Marketplace spend continued climbing, led by AWS Marketplace at 3.3% in December — a year high. Procurement is consolidating third-party tooling into AWS, reducing vendor sprawl. Azure and GCP Marketplace activity remained volatile and small, with no sustained trend.

AI- and data-focused vendors became easier to isolate in cloud portfolios. OpenAI’s share increased steadily; Snowflake, Databricks, and MongoDB held stable positions. Anthropic registered more consistently in H2 as both adoption and attribution sharpened.

The end of 2025 looks less like a turning point and more like a settling point. AWS remains dominant despite late-year softening, Azure appears stable but subdued, GCP continues its quiet climb, and the surrounding ecosystem is coalescing into a recognizable AI-and-data layer rather than a fragmented long tail.

Key Takeaways

  • AWS still leads decisively: Finished 2025 at 67.9%, up from January despite a Q4 pullback.
  • Azure drifts lower but stabilizes: Ended the year near 11%, well below January’s level but relatively flat late-year.
  • GCP gains steadily: Reached 7.0% in December, its strongest showing of the year.
  • Marketplace spend grows selectively: AWS Marketplace hit a year-high 3.3%, signaling deeper ecosystem consolidation.
  • AI/data vendor visibility improves: OpenAI and peers remain small but consistently represented as AI usage becomes easier to attribute.

2. Cost By Service Category

Here, we’re looking at how overall spend is distributed across cloud services:

By December, the service mix made one thing clear: 2025 shifted away from pure compute dominance toward the layers that support data- and AI-driven systems.

Compute ended at 48.4%, down from a late-summer peak above 51%. Usage didn’t fall — share did, as data and platform layers grew faster. December marked compute’s lowest point of the year. The stack is broadening, not shrinking.

Databases followed a different arc. After peaking at 12.9% in March, spend declined through H1 before flattening around 11% from July. December’s 11.5% suggests stabilization — and possible reacceleration as data pipelines expand to support AI.

Storage tells one of the clearest structural stories. It hovered just under 10% for most of the year, then jumped to 10.7% in September and stayed elevated, ending December at 10.4%. That step change appears durable — driven by persistent data retention, retrieval, and embedding storage needs.

The “Other” category — container orchestration, management overlays, platform services — finished December at 16.5%. After spiking to nearly 19% in April, it declined through summer before fluctuating in Q4. A reminder: platform overhead remains a meaningful and volatile share of spend.

AI/ML continued its climb to 2.5% in December, up from 1.4% in January. Still a small slice, but it outpaced nearly every other category — reinforcing the shift from pilots to production. More on AI/ML next.

Most other categories held within narrow bands. The significant shifts are happening higher in the stack, not in foundational services.

Key Takeaways

  • Compute share softens: Ended the year at 48.4% after peaking above 51% in late summer.
  • Databases stabilize: Settled near 11.5% after a steady decline earlier in the year.
  • Storage stays elevated: Held above 10% following a late-summer step change.
  • “Other” remains material: Finished at 16.5%, reflecting ongoing platform and orchestration overhead.
  • AI/ML keeps climbing: Reached 2.5% in December, setting the stage for a deeper AI spend analysis next.

3. Cost Of AI/ML

Here, we’re looking at how AI and machine learning costs are growing as a share of total cloud spend — shown as both average and median percentages to capture the full distribution of adoption across organizations:

In December, AI/ML reached its highest share of cloud spend to date.

Average AI/ML spend rose from 1.42% in January to 2.5% in December. Growth accelerated in Q4, with the sharpest jump coming between October’s 1.83% and November’s 2.41%. That late-year lift marks a clear shift from experimentation to production-scale workloads, where costs compound predictably.

The median tells an equally important story: AI/ML spend more than tripled, from 0.18% in January to 0.57% in December. Month after month, the median moved upward. AI adoption is no longer limited to heavy users; it’s spreading across the middle of the market.

The widening gap between average and median suggests a familiar pattern: a long tail of heavy GPU users, plus broad-based adoption across the middle.

AI/ML is becoming foundational infrastructure. The pattern is gradual and compounding, not spiky — costs driven by sustained inference and larger datasets rather than one-off training runs. This means complex pipelines running continuously in production.

Moreover: AI rarely breaks budgets at the model layer. It breaks budgets in the supporting layers (retrieval, storage, orchestration, observability, retries) where ‘small’ costs compound into permanent run-rate.

Key Takeaways

  • AI/ML hit a new high: Average share reached 2.50% of total cloud spend in December.
  • Adoption is broadening: Median AI/ML spend rose to 0.57%, more than tripling over the year.
  • Growth is compounding: Steady month-over-month increases point to production usage, not experimentation.
  • AI spend is now durable and regular: Once embedded, these costs are persisting and expanding rather than bursty or cycling off.

Deep Dive: How AI Is Reshaping Cloud Economics

With full-year 2024 and 2025 data now available, the headline is convergence: AI is moving from pilots to a normalized operating model. Across providers and services, 2025 looks less exploratory and more intentional — AI embedded in baseline architecture, not isolated initiatives.

Providers: experimentation gives way to consolidation

Overlaying provider data from 2024 and 2025 shows a clear behavioral shift. For clarity, we highlight only the Big Three and “Other” — where YoY changes were most pronounced.

First, look at AWS, isolated:

And now, Azure, GCP, and “Other” in a separate chart, so we can zoom in on those specifically:

The shift: in 2024, spend was more dispersed. AWS remained dominant, but Azure and GCP gained share as teams tested tools and architectures — broad experimentation.

That dispersion narrows in 2025. AWS stabilizes at a higher share; Azure and GCP settle into tighter ranges. Organizations are consolidating core workloads and committing to fewer platforms. Marketplace growth reinforces this — deeper investment within chosen ecosystems rather than platform hopping.

By year-end, provider behavior looks less exploratory and more intentional. Worth watching: multiple high-profile outages in 2025 raised board-level risk awareness. That could rekindle appetite for provider diversity in early 2026.

Services: AI rises, compute softens, data holds

Viewing 2024 and 2025 service categories together, three patterns stand out:

AI/ML becomes a structural driver. Marginal in 2024, it grew steadily every month in 2025 and finished at more than double its prior share — workloads moving into production and entrenching.

Compute softens without shrinking. It remains the largest category, but share gradually declines as AI workloads pull in supporting services. Spend is spreading outward rather than stacking vertically.

Databases and storage emerge as durable anchors. Databases trend downward through 2024 and early 2025, then flatten. Storage steps up midyear and holds. Together, they form a stable base — AI may train periodically, but data must be stored, retrieved, and queried continuously.

AI in context: from feature to foundation

Viewed year over year, AI/ML’s growth tells a simple story:

In 2024, AI was additive. In 2025, it became foundational. Its integration into everyday systems means costs propagate across compute, databases, and storage, reshaping the entire spend profile rather than sitting in a single line item.

What this means

Provider consolidation and service rebalancing point to the same conclusion. Organizations are done testing whether AI belongs in their architecture. The question now is how efficiently they operate it.

Compute still matters, but it no longer tells the whole story. Data layers are becoming long-term cost anchors. AI is steadily claiming a predictable share of cloud budgets.

This is the new baseline. Once it forms, the economics compound quietly.

The implication: AI costs don’t spiral because teams move too fast. They spiral when speed outpaces design. The next phase of cloud economics belongs to teams who build AI as infrastructure from day one, with costs that scale intentionally, not accidentally.

Actionable Guidance

If AI is becoming part of your baseline architecture, treat it like production infrastructure, not a special project. That means clear ownership, clear expectations, and clear guardrails from day one. These moves help you keep velocity without letting costs quietly compound.

1. Draw a clear line between ‘prototype’ and ‘production.’

Why: AI pilots linger, and lingering creates permanent spend.

Do this: Tag AI environments by lifecycle stage; apply different guardrails (budget caps, retention limits, SLA expectations).

Result: Fewer zombie workloads, cleaner path from experiment to accountable spend.
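One way to sketch that lifecycle split in code. The tag names, budget caps, and retention windows below are illustrative assumptions, not a prescribed schema:

```python
# Guardrails keyed by lifecycle stage; all numbers are illustrative assumptions.
GUARDRAILS = {
    "prototype":  {"monthly_budget_usd": 2_000,  "log_retention_days": 14,  "sla": None},
    "production": {"monthly_budget_usd": 50_000, "log_retention_days": 365, "sla": "99.9%"},
}

def check_workload(tags, month_to_date_spend):
    """Flag workloads whose spend exceeds the cap for their lifecycle stage."""
    stage = tags.get("lifecycle", "prototype")  # untagged defaults to the strictest tier
    cap = GUARDRAILS[stage]["monthly_budget_usd"]
    return {
        "stage": stage,
        "over_budget": month_to_date_spend > cap,
        "headroom_usd": cap - month_to_date_spend,
    }

print(check_workload({"lifecycle": "prototype"}, month_to_date_spend=3_400))
```

Defaulting untagged workloads to the prototype tier is the design choice that catches zombie experiments: anything that never declared itself production gets the tight budget cap.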

2. Measure AI cost per feature, not per model.

Why: The bill rarely maps to “a model.” It maps to what users do.

Do this: Allocate inference, retrieval, vector search, and orchestration costs to the feature that triggers them.

Result: Clearer ROI conversations; faster decisions on what to scale, tune, or sunset.
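A minimal sketch of feature-level allocation, assuming each metered line item already carries a feature tag. The field names and costs are hypothetical:

```python
from collections import defaultdict

# Hypothetical metered line items; "feature" is the user-facing feature that triggered the cost.
line_items = [
    {"service": "inference",     "feature": "chat_assist", "cost": 412.50},
    {"service": "vector_search", "feature": "chat_assist", "cost": 96.20},
    {"service": "inference",     "feature": "doc_summary", "cost": 151.00},
    {"service": "orchestration", "feature": "chat_assist", "cost": 33.10},
]

def cost_per_feature(items):
    """Roll inference, retrieval, and orchestration costs up to the feature level."""
    totals = defaultdict(float)
    for item in items:
        totals[item["feature"]] += item["cost"]
    return dict(totals)

print(cost_per_feature(line_items))
```

Grouping by feature rather than by service is what turns a bill into an ROI conversation: “chat_assist costs $540/day” is a decision; “inference costs $560/day” is trivia.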

3. Audit the pipeline, then cut silent multipliers.

Why: Costs compound in the surrounding layers: retrieval calls, embeddings, re-indexing, monitoring, retries.

Do this: Map each step in the AI workflow on a regular cadence (e.g., quarterly); remove redundancy (duplicate embedding jobs, excessive refresh schedules, unnecessary model calls).

Result: Lower run-rate without slowing delivery.
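A sketch of one such audit pass, flagging duplicate embedding jobs by (source, model) pair. The job inventory schema is an assumption:

```python
from collections import Counter

# Hypothetical scheduled jobs pulled from a pipeline inventory.
jobs = [
    {"name": "embed-docs-hourly",  "source": "s3://docs",    "model": "embed-v2", "schedule": "hourly"},
    {"name": "embed-docs-nightly", "source": "s3://docs",    "model": "embed-v2", "schedule": "nightly"},
    {"name": "embed-tickets",      "source": "s3://tickets", "model": "embed-v2", "schedule": "nightly"},
]

def duplicate_embedding_jobs(jobs):
    """Two jobs embedding the same source with the same model are a silent multiplier."""
    counts = Counter((j["source"], j["model"]) for j in jobs)
    return [key for key, n in counts.items() if n > 1]

print(duplicate_embedding_jobs(jobs))  # the s3://docs corpus is being embedded twice
```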

4. Put lifecycle policies on AI data: keep, tier, delete.

Why: Storage and databases become cost anchors when datasets grow without cleanup.

Do this: Set default retention windows and tiering rules for training data, logs, prompts, and embeddings.

Result: Storage stops creeping upward month after month.
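Those defaults can be expressed as a small policy table. The data classes and retention windows below are illustrative, not recommendations:

```python
from datetime import date

# Illustrative defaults per data class: keep hot, then tier to cold storage, then delete.
POLICY = {
    "prompts":    {"tier_after_days": 30, "delete_after_days": 90},
    "embeddings": {"tier_after_days": 90, "delete_after_days": 365},
    "train_logs": {"tier_after_days": 14, "delete_after_days": 60},
}

def action_for(data_class, created, today=None):
    """Return 'keep', 'tier', or 'delete' based on the object's age."""
    today = today or date.today()
    age_days = (today - created).days
    rule = POLICY[data_class]
    if age_days >= rule["delete_after_days"]:
        return "delete"
    if age_days >= rule["tier_after_days"]:
        return "tier"
    return "keep"

print(action_for("prompts", date(2026, 1, 1), today=date(2026, 2, 15)))  # 45 days old -> "tier"
```

In practice, the same table translates directly into native lifecycle rules (e.g., object storage tiering policies), so the policy lives in one place and the cloud enforces it.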

5. Optimize inference like a performance problem.

Why: As AI moves into production, inference scales with users, and inefficiency scales with it.

Do this: Add caching, batching, and smaller context defaults. Treat prompt and context size like payload optimization.

Result: Lower cost per request and fewer surprise jumps as usage grows.
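A minimal sketch of the caching piece, assuming a hypothetical `call_model` stand-in for the billable inference call (no real provider API is implied):

```python
from functools import lru_cache

def call_model(prompt: str) -> str:
    """Stand-in for a paid inference call; in reality this is the expensive step."""
    return f"response to: {prompt[:40]}"

BILLABLE_CALLS = {"count": 0}

@lru_cache(maxsize=4096)
def cached_inference(prompt_key: str) -> str:
    BILLABLE_CALLS["count"] += 1  # only cache misses reach the model (and the bill)
    return call_model(prompt_key)

def ask(prompt: str) -> str:
    # Normalize before lookup so trivially different prompts share a cache entry.
    key = prompt.strip().lower()
    return cached_inference(key)

ask("What drove storage costs up?")
ask("what drove storage costs up?  ")  # cache hit: no second billable call
print(BILLABLE_CALLS["count"])  # -> 1
```

The same framing applies to batching and context trimming: treat tokens like payload bytes, and the usual performance-engineering toolkit carries over.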

6. Run a monthly ‘provider drift’ review tied to architectural intent.

Why: 2025 showed consolidation, but drift still happens. And it’s expensive when accidental.

Do this: Review provider share changes alongside the “why” (new region, new service, acquisition, AI rollout).

Result: Fewer unplanned migrations; cleaner multi-cloud strategy tied to business priorities.
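A sketch of the drift check itself, comparing month-over-month provider share against a tolerance. Thresholds and numbers are illustrative:

```python
# Provider share of spend (%) for two consecutive months; numbers are illustrative.
november = {"AWS": 68.4, "Azure": 11.0, "GCP": 6.8, "Other": 3.6}
december = {"AWS": 67.9, "Azure": 11.0, "GCP": 7.0, "Other": 3.4}

def provider_drift(prev, curr, threshold_pts=1.0):
    """Flag providers whose share moved more than the threshold, for a follow-up on the 'why'."""
    flagged = {}
    for provider in curr:
        delta = round(curr[provider] - prev.get(provider, 0.0), 1)
        if abs(delta) >= threshold_pts:
            flagged[provider] = delta
    return flagged

print(provider_drift(november, december))       # nothing moved more than a point
print(provider_drift(november, december, 0.2))  # a tighter threshold surfaces the small shifts
```

The flagged deltas are the agenda for the review; the human part is attaching the “why” to each one.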

Your bottom line: AI spend doesn’t surprise teams just because usage grows. It surprises them when experimentation or production runs without clear ownership, expectations, visibility, and guardrails. Pipelines, data, and product adoption do compound costs, but disciplined governance and clear allocation are what keep both AI testing and production aligned to real business value without slowing growth.

Your Takeaway For This Month

The cloud isn’t getting cheaper or simpler. It’s getting more opinionated. AI is hardening into the stack, data is becoming the long-term cost surface, and the days of treating either as “variable” are fading. The advantage goes to teams that design for this reality early — making fast calls, revisiting architecture often, adjusting spend with the same cadence they ship features. In 2026, cloud economics rewards decisiveness, not caution.

In short: Cloud economics is no longer about saving money — it’s about choosing where cost is allowed to compound.

Thoughts, comments, disagreements? Reply to this Pulse or email [email protected] with “CEP” in the subject line. We’ll feature the best feedback in an upcoming issue. Watch for our next Cloud Economics Pulse on February 10, 2026, and on the second Tuesday of every month.

The Cloud Cost Playbook

The step-by-step guide to cost maturity
