The word “Trillion” never fails to set the tech world on fire.
Foundation Capital’s Jaya Gupta and Ashu Garg are two of the most recent firestarters. Late in December, they co-wrote “AI’s trillion-dollar opportunity: Context graphs,” outlining how AI will move from organizational knowledge to organizational comprehension. To get there, it needs to know more than the final decision a human made; it needs to know every step of reasoning each person involved took to reach it.
Their example: applying a higher-than-normal renewal discount. The “context graph” that led to the elevated discount would map each step of the edge case — who made what decisions, when, based on which data — effectively creating a map of the “why” that AI could autonomously apply to future renewals whose context matches.
I love this. Context graphs will make AI decisively more useful, and any business not thinking about how to implement them is already behind. The companies that get this right early will build durable advantages that late movers won’t be able to close easily.
There is, however, one thing conspicuously absent from the Foundation Capital piece, and from most of the breathless coverage of agentic AI in general: a financial governance layer. Context graphs will radically expand what AI can do autonomously — including how it spends your money. And right now, most organizations have no systematic way to hold it accountable for those decisions.
That’s not a reason to slow down. It’s a reason to build the right infrastructure now, before you get sunk by well-intentioned but rogue AI spending on your behalf.
The Accountability Gap
Human accountability runs on a simple model: decision; outcome; consequence. We know how fast humans move, what damage they can do in a given period, and what’s at stake. That knowledge lets us design consequences that manage risk proportionally. A human who spends $1M on a problem that should have cost $100 can be disciplined, retrained, or fired.
You can’t fire an AI. You can’t make it feel the financial impact of a bad call. And if context-graph-fueled AI applies a renewal discount that tanks your margins across a customer segment, there’s no performance review coming. There’s just the bill.
This is the accountability gap. As AI takes on more autonomous financial decision-making — discounts, ad spend, infrastructure provisioning, vendor selection — the gap between what it can do and what it should do becomes an existential business risk. The organizations that close this gap early will be the ones that look back on the AI era as a controlled experiment in compounding advantage. The ones that don’t will face a perilous pivot: Growth stalls, margins collapse, and they scramble to cut what they should have been measuring all along.
Don’t be in the second group.
Context graphs, and any other system that expands AI’s autonomous decision-making radius, need a financial counterpart: an economics barometer that connects every AI-driven investment to its business outcome in real time.

What An Economics Barometer Actually Is
This isn’t a dashboard. It’s a living governance system built around a unit cost metric: the most granular, business-relevant way to express what your AI is actually spending, and on what.
For a food delivery app, that’s cost per order. For a SaaS company, it’s cost per customer or cost per active user. For an AI-native product, it might be cost per inference or cost per successful outcome. The unit cost metric is the lens through which every AI investment becomes legible, not as a line item on a cloud bill, but as a signal about whether the business is becoming more or less efficient at delivering value.
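To make the lens concrete, here is a minimal sketch of a unit cost metric as a data structure. Everything in it is hypothetical: the `UnitCostMetric` name, its fields, and the sample numbers are stand-ins for whatever your business actually measures.

```python
from dataclasses import dataclass

@dataclass
class UnitCostMetric:
    """A business-relevant unit cost: AI-attributed spend over units of value delivered."""
    name: str              # e.g. "cost_per_order" for a food delivery app
    spend: float           # AI-attributed spend in the window, in dollars
    units_delivered: int   # orders, active users, successful inferences...

    @property
    def value(self) -> float:
        # Guard against divide-by-zero when no units have been delivered yet.
        return self.spend / self.units_delivered if self.units_delivered else float("inf")

# A delivery app's lens: $12,400 of AI spend against 41,000 orders this week.
metric = UnitCostMetric("cost_per_order", spend=12_400, units_delivered=41_000)
print(f"{metric.name}: ${metric.value:.3f}")  # cost_per_order: $0.302
```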
Around that unit cost metric, you build what I’d call an Efficiency Level Objective (ELO). This comes directly from the SRE playbook, where engineers have long managed reliability risk through error budgets. The ELO defines how long a system is allowed to operate above its target unit cost before intervention is required. Operate under budget long enough and you earn back “tolerance,” a credit that gives your teams room to experiment. Blow past the threshold and the system triggers alerts, escalations, or automated corrective actions.
The ELO framework makes the economics barometer dynamic rather than static. Instead of a fixed budget that you either hit or miss at the end of the month, you have a continuous, real-time signal (sketched in code after this list) that tells you:
- What your AI is spending, per unit of business value
- Whether that spend is trending toward or away from your target
- How much tolerance you have left before you need to intervene
- Whether last quarter’s efficiency gains have earned you runway to take bigger bets this quarter
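Here is a minimal sketch of that error-budget mechanic, under stated assumptions: a fixed target unit cost and a dollar-denominated tolerance balance that is earned under target and burned over it. The class and method names are illustrative, not a standard API.

```python
from dataclasses import dataclass

@dataclass
class EfficiencyLevelObjective:
    """SRE-style error budget applied to unit cost (illustrative, not a standard)."""
    target_unit_cost: float  # e.g. $0.35 per order
    tolerance: float = 0.0   # banked headroom, in dollars

    def record_period(self, actual_unit_cost: float, units: int) -> str:
        # Under target: bank the savings as tolerance teams can spend on experiments.
        # Over target: burn tolerance; once it is exhausted, escalate.
        self.tolerance += (self.target_unit_cost - actual_unit_cost) * units
        if self.tolerance >= 0:
            return f"ok: within ELO, ${self.tolerance:,.0f} of tolerance left"
        return f"breach: ${-self.tolerance:,.0f} over budget, trigger corrective action"

elo = EfficiencyLevelObjective(target_unit_cost=0.35)
print(elo.record_period(actual_unit_cost=0.30, units=40_000))  # banks $2,000
print(elo.record_period(actual_unit_cost=0.42, units=40_000))  # burns $2,800 -> breach
```

The design choice that matters is that tolerance is earned rather than granted: sustained efficiency buys room for bigger bets, and sustained overspend forces intervention before the month-end bill does.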
This is the infrastructure that makes context graphs safe to deploy at scale. Not by constraining AI’s decision-making, but by making its financial consequences visible and actionable in real time.
The Cost Of Not Building This
Here’s what happens if you don’t.
Your context graphs go live. They work — impressively, at first. AI applies learned reasoning to edge cases faster than any human team could. Renewal discounts get extended more generously; ad bids get optimized more aggressively; infrastructure gets provisioned more dynamically. The efficiency gains are real.
Then, six weeks later, a $200,000 bill arrives that nobody saw coming. The AI had been learning to optimize for the metrics you gave it, which weren’t the right ones. Cost per decision looked fine. Cost per unit of business value had been deteriorating for weeks, invisibly, because nobody was measuring it.
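To see how that divergence plays out, here is a deliberately simplified illustration with entirely invented numbers: the per-decision price stays flat while the agent quietly makes more decisions per order each week.

```python
# Invented numbers for illustration: a flat $0.02 per AI decision,
# while the agent drifts toward more decisions per order each week.
cost_per_decision = 0.02
orders_per_week = 150_000
decisions_per_order = [3, 4, 6, 9, 13, 18]  # assumed drift over six weeks

total = 0.0
for week, dpo in enumerate(decisions_per_order, start=1):
    weekly_bill = dpo * orders_per_week * cost_per_decision
    total += weekly_bill
    print(f"week {week}: cost/decision $0.02 (looks fine), "
          f"cost/order ${dpo * cost_per_decision:.2f} (deteriorating), "
          f"weekly bill ${weekly_bill:,.0f}")
print(f"six-week total: ${total:,.0f}")
```

The per-decision metric never moves; only the unit cost metric catches the drift.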
This is not a hypothetical. It is the early-cloud story, replayed at AI speed. Companies that didn’t build cost observability into their cloud architectures early spent years unwinding the consequences: rightsizing, rearchitecting, and explaining to boards why a technology that was supposed to save money became their fastest-growing cost center.
AI is that story, compressed. The learning curve is steeper, the decisions more autonomous, the billing more opaque, and the speed of compounding faster in both directions. Get the economics barometer right and the compounding works for you. Get it wrong and you’ll be managing a crisis instead of a competitive advantage.
Build It Before You Need It
An economics barometer isn’t a constraint on AI ambition. It’s what makes AI ambition sustainable.
You can’t make AI feel the weight of a bad investment. You can’t fire it when it blows the budget. What you can do is build a system that makes the financial consequences of every AI decision visible, measurable, and correctable — before the stakes get high enough to hurt.
Context graphs are coming whether you build the governance layer or not. The question is whether you’ll be watching when they start spending.

