Table Of Contents
The Structural Shift
What Leadership Teams Need to Internalize
Where This Leaves Leadership Teams

Every company adopting AI is facing the same problem: the cost of AI adoption in products, in operations, and especially in engineering is accelerating with no alignment between spend and value.

The competitive pressure is real. Companies that don’t invest in AI will be displaced by those that do. But the investment itself is becoming inscrutable. Leadership teams can’t explain where the money is going, whether it’s working, or what it costs to deliver a specific outcome to a specific customer.

That’s the paradox: you must invest, but the investment is structurally ungovernable.

$2.52T: global AI spending in 2026, up 44% year over year
28%: of AI use cases in infrastructure and operations (I&O) meet ROI expectations
20%: of companies report actually growing revenue through AI
The Paradox

You have to spend more, not less. But you have to be able to compare what you’re spending to the value you’re getting for it — per customer, per feature, per agent task. The companies that pull back will risk obsolescence. The ones that lean in with that clarity will build the moats that matter.

The AI era won’t punish high spend. It’ll punish blind spend.

The Structural Shift

When AI agents are priced per resolution or per work unit, the pricing looks legible: it scales with usage, just as cloud pricing did. The deeper problem is that the cost to produce each resolution varies enormously with AI effort. One customer ticket takes a simple lookup. Another requires deep reasoning, multiple tool calls, and three model invocations.

The unit price is the same. The cost to deliver it can differ by 100x. That's the layer that's inscrutable, and it's shared across customers, features, and workflows in ways that make allocation genuinely hard.

Salesforce delivered 2.4 billion Agentic Work Units last quarter — $800 million in Agentforce ARR — then watched its stock plunge 26% when CIOs couldn’t explain what those units actually cost. Intercom charges $0.99 per resolution and has scaled past $100 million in ARR, but sustaining that margin requires knowing cost-per-resolution across every customer segment and issue type. Zendesk, Microsoft, Workday, and ServiceNow are all converging on the same pattern: define a unit of AI work and price on consumption.

The same dynamic is playing out in software development. Engineering teams adopting coding agents are generating AI spend that scales with the ambition of what they ask the agent to do — not with headcount, lines of code, or any metric leadership currently tracks. A developer using an agentic coding tool can generate thousands of dollars in inference costs in a single session. Multiply that across a team, across multiple tools, and the R&D budget starts moving in ways no one forecasted.

In both enterprise platforms and engineering tools, the problem is the same. Activity-based metrics just measure usage. They can’t connect to value. Was an AWU worth it? Was that $3,000 coding session productive? It depends entirely on what the “work” delivered. Allocation solves this by reframing cost in terms the business already understands: cost-per-customer, cost-per-service, cost-per-delivery, cost-per-release. Those are units a CFO can relate to revenue and margin. “AWUs consumed” and “tokens used” are not. Activity metrics will never be good enough for businesses to answer the question that actually matters: what return was realized on that agent investment, and did the value justify the cost?
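To make the reframing concrete, here is a minimal sketch of what allocation means in practice: rolling raw token usage up into cost-per-customer and cost-per-feature. All customer names, features, model tiers, and prices below are hypothetical, invented purely for illustration.

```python
from collections import defaultdict

# Hypothetical usage records: each AI call is tagged with the customer and
# feature that drove it, plus the tokens it consumed.
usage = [
    {"customer": "acme",   "feature": "ticket_triage", "model": "small", "tokens": 1_200},
    {"customer": "acme",   "feature": "deep_research", "model": "large", "tokens": 85_000},
    {"customer": "globex", "feature": "ticket_triage", "model": "small", "tokens": 900},
]

# Assumed per-1K-token prices by model tier (illustrative, not real rates).
price_per_1k = {"small": 0.0005, "large": 0.015}

def allocate(records):
    """Roll raw token usage up into cost-per-customer and cost-per-feature."""
    by_customer = defaultdict(float)
    by_feature = defaultdict(float)
    for r in records:
        cost = r["tokens"] / 1000 * price_per_1k[r["model"]]
        by_customer[r["customer"]] += cost
        by_feature[r["feature"]] += cost
    return dict(by_customer), dict(by_feature)

by_customer, by_feature = allocate(usage)
print(by_customer)  # cost per customer, in dollars
print(by_feature)   # cost per feature, in dollars
```

The point of the sketch is the shape of the output: "tokens used" becomes "acme costs X to serve, and deep_research is the feature driving it", which are numbers a CFO can set against revenue.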

Allocation is the technique that made cloud spend legible: attributing shared infrastructure costs to teams, products, and customers until the numbers meant something to the business. The same technique applies to AI, but the problem is harder. Cloud costs correlated roughly with traffic — more users, more compute. AI cost scales with ambition. A classification call and a deep reasoning chain can differ by 100x on the same model, for the same customer. The more ambitious the use case, the less predictable the cost, and billing data doesn't tell you which feature, which customer, or which intent drove it.

And underneath all of this: current token prices are subsidized. Providers are burning capital to establish position. That’s the same pattern cloud went through a decade ago. 

Every CFO knows a reckoning is coming. When it does, the companies that understand their cost-per-unit will negotiate from knowledge. Everyone else will absorb whatever pricing the providers set.

FinOps In The AI Era: A Critical Recalibration

What 475 executives told us about AI and cloud efficiency.

What Leadership Teams Need to Internalize

1. Developer AI spend is the fastest-growing ungoverned budget line

Coding agents are converting salaried engineering work into variable AI spend. Cursor's shift to usage pricing sent some developers' bills from $28 a month to $500 in three days. Engineering organizations now have 15–30% of R&D flowing to AI across multiple tools, with no unified view and no cross-tool attribution. The spend is real, growing fast, and unmanaged.

2. One question separates survivors from casualties

Every billing dashboard can answer one question: how much did we spend on AI last month? The question that matters is different: what does it cost to serve Customer X on Feature Y using Model Z, and is the margin sustainable?

That requires joining customer identity, product feature, model, and pricing tier, then reconciling against contract rates that differ 20–40% from list price. It’s a multi-dimensional allocation problem, and every CFO will face it.
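A hedged sketch of that join, under invented assumptions: usage events keyed by (customer, feature, model) are priced at the customer's negotiated contract rate rather than list price. Every name, rate, and discount here is hypothetical; real reconciliation would pull contract terms from a billing system.

```python
# Assumed list price per 1K tokens for a hypothetical "model_z".
list_price_per_1k = {"model_z": 0.010}

# Assumed contract discounts by customer (the 20-40% gap from list price
# mentioned above; 30% is an illustrative midpoint).
contract_discount = {"customer_x": 0.30}

def contracted_cost(customer, model, tokens):
    """Cost of one call at the customer's contracted rate, not list price."""
    rate = list_price_per_1k[model] * (1 - contract_discount.get(customer, 0.0))
    return tokens / 1000 * rate

# Usage events: (customer, feature, model, tokens) -- all hypothetical.
events = [
    ("customer_x", "feature_y", "model_z", 40_000),
    ("customer_x", "feature_y", "model_z", 12_500),
    ("customer_w", "feature_y", "model_z", 5_000),
]

# "What does it cost to serve Customer X on Feature Y using Model Z?"
cost_xy = sum(contracted_cost(c, m, t) for c, f, m, t in events
              if c == "customer_x" and f == "feature_y" and m == "model_z")
print(round(cost_xy, 4))
```

The hard part in production is not the arithmetic but the join itself: tagging every inference call with customer and feature identity at the point of use, so the filter in the last line is even possible.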

3. Control towers are shipping without a cost layer

Enterprise AI governance is taking shape fast across identity, operational controls, security, and developer platforms. But the systems being built to govern what agents do have no visibility into what agents cost. Operational telemetry and financial telemetry are different data, owned by different systems, on different timelines. Until they're joined, governance decisions are being made without knowing their economic consequences.

Where This Leaves Leadership Teams

Pulling back on AI isn't an option; the competitive pressure is real. So the question every leadership team faces is whether they can connect what they're spending to what they're getting, or whether they'll keep funding AI the way most are now: swapping headcount for inference costs and calling it a strategy.

The companies that establish that visibility, whether built, bought, or cobbled together, will make AI investment decisions with confidence. The ones that don’t will keep making them on intuition, and intuition doesn’t survive a board meeting where the numbers don’t add up.
