Efficiency is the floor, not the ceiling

PwC published its AI Performance Study this month. 1,217 executives surveyed, 25 sectors covered. The number that should travel further than it has: 20% of companies capture 74% of the economic value of AI.

That is a Pareto distribution sharper than almost anything else in business performance. Sharper than venture returns. Sharper than software platform economics. And the part the report digs into — what separates the 20% from the 80% — is not what most people assume.

It is not budget. The 20% are not spending more in absolute terms. It is not stack. The model contracts and infrastructure choices are roughly similar at the top end. It is not even talent in the headline sense; both groups recruit from the same pool.

The difference is what the company uses AI for. The 80% automate what they already do. The 20% sell what they could not sell before.

Efficiency is the floor

[Figure: bar chart. 20% of companies capture 74% of the economic value of AI; 74% shown in dark, 26% in light.]
The split is not budget. Not stack. It is what the technology is used for.

The 80% are not wasting money. They are doing something rational and limited. They have AI, they apply it to existing processes — customer support tickets, invoice extraction, sales-email drafting — and they measure the resulting cost reduction. The numbers usually look fine in a quarterly review. Three percent savings in operations, six percent in support, two percent in legal review.

The trap is that those savings are the floor of what AI can do, not the ceiling. The companies that stop at automation are doing the same thing they did before, slightly cheaper. They have not changed the shape of their business. They have not opened a new market. They have not built a product that did not exist a year ago.

Within a year, that 3-6% saving is the new baseline. A quarter or two later, competitors that did the same thing erase the relative advantage. By the end of year two, the company is back where it started: same shape, same revenue, slightly lower cost, except now it has to maintain an AI stack on top of everything else. Efficiency without new value is technical debt that produces a brief quarter of relief.

The 20% are doing something different at the foundation. They are using AI as a wedge into adjacent markets, as a way to charge for work they could not previously deliver, as the underlying substrate of new products. The McKinsey 2025 report noted the same pattern in different language: the high-performing cohort is twice as likely to be using AI to enter new business lines, not just optimize existing ones.

The 20% are building new products on top of the API

The clearest example of this dynamic in real time is Anthropic's revenue trajectory.

[Figure: trend line. Anthropic run-rate from $9 billion to $30 billion in one year.]
Customers did not get there by saving on emails. They built things on top of the API.

Anthropic moved from a $9B run-rate to a $30B run-rate in twelve months. That is not the kind of growth that comes from incremental efficiency improvements in their customers' workflows. If every Claude customer saved 5% on something through automation, Anthropic's revenue would have grown maybe 30-40%. Not 233%.
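The arithmetic behind that comparison is easy to check. The sketch below uses the post's own numbers; the "efficiency-only" scenario borrows its rough 30-40% figure as an assumption, not as data from any report.

```python
# Back-of-envelope check on the growth claim. Run-rate figures are the
# ones quoted in the post; the efficiency-only ceiling is an assumption.

start_run_rate = 9e9   # $9B run-rate at the start of the period
end_run_rate = 30e9    # $30B run-rate twelve months later

actual_growth = (end_run_rate - start_run_rate) / start_run_rate
print(f"Actual growth: {actual_growth:.0%}")

# If customers had only expanded spend via incremental efficiency wins
# (the post's rough 30-40% scenario), revenue lands far below $30B.
efficiency_only_ceiling = start_run_rate * 1.4
print(f"Efficiency-only ceiling: ${efficiency_only_ceiling / 1e9:.1f}B")
```

The gap between the 1.4x ceiling and the observed 3.3x is the part that efficiency alone cannot explain.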

What happened is that customers built new products on top of the API. Coding agents that did not exist before, charging $200/month per developer. Research assistants charging $500/seat to enterprise. Customer-support agents that companies are now reselling as a feature of their own product. The API is upstream of a wave of new products, and Anthropic captures a sliver of each one.

The PwC number is the same dynamic at the customer level. The 20% are building those new products. The 80% are using Anthropic to write internal emails faster.

Governance is the unsexy multiplier

The PwC report points to four pillars that the 20% share. None of them is surprising or interesting on its own; the combination is where most companies fail.

[Figure: four-box grid. 01 Governance with clear roles. 02 Clean, integrated data. 03 In-house talent. 04 Dedicated ROI measurement.]
Each pillar is boring on its own. The combination is what compounds. Most companies miss one and lose the multiplier.

Governance with clear roles. Not committees. Roles. Someone is the directly-responsible individual for the AI initiative. They have authority to override product roadmaps. They report a number that maps to business impact. When the model is wrong, they are the one who answers for it. Most Spanish enterprises I see have an "AI committee" with eight directors and no DRI; that committee is a license to not decide.

Clean, integrated data. The 20% are not running on pristine data — that does not exist anywhere. They are running on connected data. The customer record can be joined to the support history can be joined to the product usage. Cross-silo, queryable, current. The 80% have great data in 14 systems, none of which talk to each other.
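What "connected" means in practice is a shared key that lets the silos be queried as one. A minimal sketch, with hypothetical table and column names (none of this comes from the PwC report):

```python
# Three records that normally live in separate systems, joined on a
# shared customer key. Schema and data are invented for illustration.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE customers       (customer_id TEXT PRIMARY KEY, segment TEXT);
CREATE TABLE support_tickets (customer_id TEXT, opened TEXT, topic TEXT);
CREATE TABLE product_usage   (customer_id TEXT, feature TEXT, events INTEGER);

INSERT INTO customers       VALUES ('c1', 'enterprise');
INSERT INTO support_tickets VALUES ('c1', '2025-11-02', 'billing');
INSERT INTO product_usage   VALUES ('c1', 'export', 42);
""")

# One queryable view across the silos: who the customer is, what they
# complained about, and what they actually use.
row = con.execute("""
    SELECT c.segment, t.topic, u.feature, u.events
    FROM customers c
    JOIN support_tickets t USING (customer_id)
    JOIN product_usage   u USING (customer_id)
""").fetchone()
print(row)  # ('enterprise', 'billing', 'export', 42)
```

The 80%'s version of this query spans 14 systems and three export-to-CSV steps; the 20%'s version is one join.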

In-house talent. People on staff who can read traces, design evals, debug a multi-turn agent failure. Not consultants. Consultants are great for scoping the first project; they are terrible for the second and third iterations because each iteration depends on context from the previous one. The 20% have at least one team that owns the AI stack end-to-end and stays there.

Dedicated ROI measurement. A P&L line for AI initiatives, not innovation theatre. The 20% measure AI ROI the same way they measure marketing ROI: by attribution, by cohort, by a number that ladders into revenue or margin. The 80% measure AI by deck slides at all-hands meetings.
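The measurement itself is not sophisticated; it is the standard ROI formula applied per initiative and per cohort. A toy sketch with invented numbers:

```python
# A P&L line for AI, in miniature: ROI attributed per cohort, the same
# way marketing spend is measured. All figures are invented examples.

def roi(incremental_margin: float, cost: float) -> float:
    """Return on investment: (incremental margin - cost) / cost."""
    return (incremental_margin - cost) / cost

# Hypothetical initiative cohorts with attributed margin and fully
# loaded cost (inference, tooling, team time).
cohorts = {
    "support-agent Q1": {"incremental_margin": 420_000, "cost": 150_000},
    "coding-assist Q1": {"incremental_margin": 90_000,  "cost": 120_000},
}

for name, c in cohorts.items():
    print(f"{name}: ROI {roi(c['incremental_margin'], c['cost']):+.0%}")
```

The point is not the formula but the discipline: each initiative gets a number that ladders into margin, and a negative number (like the second cohort here) triggers a decision rather than a nicer slide.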

Why Spanish companies usually end up in the 80%

There is a structural reason most Spanish enterprises sit on the wrong side of this distribution, and it is worth naming because it is fixable.

The AI budget in most Spanish companies lives inside "innovation" or "digital transformation". Those budgets are funded by HR or by central CIO functions. They have no P&L. They have no DRI in the business unit that the AI is supposed to serve. When the pilot ends successfully, the next stage — productizing the AI inside the operating unit — requires a budget transfer that nobody is incentivized to make.

The pilot dies in the gap between innovation and production. The team that built it does not own the next mile. The team that should operate it does not have budget for it. The AI sits in an internal demo for six months, gets quietly archived, and the company runs the pilot again two years later with a different vendor.

The Spanish companies in the 20% have done one specific thing differently: they put AI budget inside the operating units that consume the AI, with a DRI, with a P&L attribution. The "innovation" budget pays for exploration. The operating budget pays for production. The gap that kills 80% of pilots does not exist because there is no handoff.

This is not glamorous and no consultant slide deck will describe it this way. It is the single largest source of value loss I see in this market.

Two operational habits worth borrowing

Two habits I have seen separate the operators who land on the 20% side from the ones who stay on the 80%.

AI budget sits inside the operating unit that consumes the AI. Not innovation. Not a cross-cutting CIO function. Inside the unit whose P&L the AI is supposed to improve, with a DRI whose number ladders into business outcomes. When the model drifts, the team that owns the P&L feels it in revenue and prioritizes the fix without waiting for innovation to file a ticket. This is the single most predictive structural choice I see between companies whose AI compounds and companies whose AI stalls.

Quarterly reviews ask what is now possible, not what is now cheaper. The right question to put to every AI feature is: what could we do this quarter that we could not do last quarter? Faster is not enough. Cheaper is not enough. Both belong on the floor. Above the floor, the only question that matters is whether the capability has opened work that did not exist as a sellable artifact a year ago. Companies that ask this question quarterly drift toward the 20%. Companies that ask "did we save money?" drift toward the 80%.

Neither of these is a budget question. Neither requires a particular model contract or vendor relationship. Both are organizational disciplines that any company can adopt by Monday. Almost none do.

Closing

If your AI program is producing efficiency gains, that is good. It is also the easy half. The hard half is building products and services that did not exist before, and capturing value from customers who would not have paid you a year ago.

Most of the next ten years of business performance variance is going to come from this gap. The companies that treat AI as a cost-reduction lever will keep their margins flat. The companies that treat it as a new-value generator will compound. Most of that gap will be visible within five years, and a large share of it within the next two.

Efficiency is the floor. Optimizing today's processes prepares you for today. Building new products on top of the model prepares you for the day after tomorrow, which is the only day that ends up mattering.
