Jack Dorsey and Roelof Botha just published something called From Hierarchy to Intelligence — Block's internal operating model for AI-native organizations, written up in public. It is being read as an architecture document. It is actually a strategy document, and the strategy is the part that matters.
The thesis, in one line: in most companies today, the traditional product roadmap is the single biggest factor limiting what AI can do for the business. Not budget, not stack, not talent. The roadmap. Because the roadmap was designed for a world where features were the unit of progress, and AI-native organizations don't progress feature by feature.
I have been mulling over their framework for two weeks. Some of it transfers directly to what we are building at Shakers. Some of it doesn't fit a marketplace context. Most of it deserves more attention than it has gotten, especially among Spanish operators who are still running quarterly OKR cycles like it is 2019.
Capabilities are the atomic unit, not features
The first piece of the architecture is capabilities. Not features. Not products. Atomic, actionable primitives.
For Block, the example they give is a clean one: "match talent to a job at deployment-grade quality". That is a capability. It is small, it is named, it has a clear contract (input, output, quality bar), and it can be reused across many downstream products.
The reason this matters is that capabilities are durable. A feature has a six-month lifespan and lives inside one product. A capability lives at the company level and shows up in a dozen places. When you list your capabilities and find five that no competitor can replicate inside six months, you have your moat. When you list them and find that all of them are commodity, you don't.
The order of operations Dorsey and Botha insist on is the part most companies skip. Define capabilities before you choose vendors. Not after. The 2024 pattern was: pick GPT-4 or Claude, then figure out what to build with it. The 2026 pattern they argue for is: list the five capabilities you need, define their quality bars, then pick the cheapest combination of models, tools, and partnerships that meet those bars.
That order matters because vendor choice is reversible and a capability inventory is not. You can swap models in a week if your capability contracts are clean. You cannot rebuild a strategic capability in a year if you designed it around the assumption that one vendor's model would always be the cheapest option.
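What a clean capability contract with swappable vendors looks like can be sketched in a few lines. Everything below is hypothetical — the names, the stub backends, the quality bar — a sketch of the shape, not Block's or anyone's actual implementation.

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Match:
    candidate_id: str
    score: float  # 0.0-1.0, calibrated against the capability's eval suite

class MatchBackend(Protocol):
    """Anything that satisfies the contract's input/output shape."""
    def __call__(self, job_description: str) -> list[Match]: ...

# The quality bar is part of the contract, defined before any vendor is chosen.
QUALITY_BAR = 0.85

def vendor_a_backend(job_description: str) -> list[Match]:
    # Stub standing in for a call to vendor A's model.
    return [Match("cand-1", 0.91), Match("cand-2", 0.88)]

def vendor_b_backend(job_description: str) -> list[Match]:
    # A cheaper vendor can replace vendor A if it clears the same bar.
    return [Match("cand-3", 0.90), Match("cand-2", 0.87)]

def match_talent(job_description: str, backend: MatchBackend = vendor_a_backend) -> list[Match]:
    """The capability the rest of the company calls; callers never see the vendor."""
    return [m for m in backend(job_description) if m.score >= QUALITY_BAR]
```

Because callers depend on `match_talent` and its contract rather than on the backend, swapping vendors is a one-line change plus an eval run — which is the reversibility the framework is betting on.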
The world model is your honest signal
Layer two is the world model — the business entities the AI operates on, captured honestly. For Block: customer accounts with stack, AI maturity, agents deployed, real engagement history. The point is "honest" — the world model is not the marketing CRM, it is the operational reality.
At Shakers, our world model is the talent graph: every freelancer, their stack, availability, rate range, past hire signal, recent activity. The honest version of that graph is not the same as the user-facing profile. The user-facing profile is what the talent wants you to see. The honest world model is what we have observed across thousands of hires.
The Block argument is that the quality of your AI is bounded by the quality of your world model. If your world model is missing 30% of customer signal, your AI is missing 30% of customer signal, and there is no model upgrade that fixes that. Garbage in, garbage out, except more expensive because you can now generate garbage at scale.
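One way to make the declared-versus-observed split concrete is to keep both versions in the schema and force the intelligence layer to read the observed one. The field names below are invented for illustration; this is not Shakers' actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class TalentProfile:
    # Declared: what the freelancer wrote on their user-facing profile.
    declared_skills: list[str]
    declared_rate: float

@dataclass
class TalentWorldModel:
    """The honest version: what the marketplace has actually observed."""
    profile: TalentProfile
    observed_skills: dict[str, float] = field(default_factory=dict)  # skill -> evidence from past hires
    response_rate_30d: float = 0.0
    completed_hires: int = 0

def matchable_skills(t: TalentWorldModel, min_evidence: float = 0.5) -> list[str]:
    # The AI matches on observed signal, not on the self-reported list.
    return [s for s, ev in t.observed_skills.items() if ev >= min_evidence]
```

If your matching model routinely reads `declared_skills` instead, the missing-signal problem Block warns about is already in production — just hidden behind a working demo.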
The intelligence layer composes, it does not build
Layer three is where the model lives, but Dorsey and Botha are careful about this. The intelligence layer is a composition layer. It assembles capabilities into systems. When a capability is missing — say, you need to extract intent from a customer message and you do not have a capability for that yet — you do not build the intelligence layer first. You go back to the capability layer, build the missing primitive, and then the intelligence layer composes naturally.
This is the inversion most companies are getting wrong. They build the agent first, find out it needs a capability that does not exist, and then they monkey-patch the missing piece inside the agent. Six months later the agent is a 4,000-line prompt with eight hidden capabilities embedded in it, and the team that built it is the only team that can maintain it.
The Block model says: when an agent needs a capability you don't have, build the capability first, with a clean contract, eval suite, and ownership. Then have the agent call it. The capability is now reusable. The next three agents that need it can call it too. The next-quarter version of that agent is composable instead of monolithic.
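The compose-don't-embed rule can be sketched as a capability registry: the agent is a thin composition over named primitives, and a missing primitive gets registered with its own contract before any agent calls it. All names here are hypothetical stubs, not a real implementation.

```python
from typing import Callable

# Each capability is a named primitive with its own owner and eval suite.
CAPABILITIES: dict[str, Callable[[str], str]] = {}

def capability(name: str):
    """Register a primitive at the capability layer, not inside an agent."""
    def register(fn: Callable[[str], str]) -> Callable[[str], str]:
        CAPABILITIES[name] = fn
        return fn
    return register

@capability("extract_intent")
def extract_intent(message: str) -> str:
    # Stub: a real version would call a model that cleared this capability's eval.
    return "hire" if "looking for" in message.lower() else "other"

@capability("match_talent")
def match_talent(intent: str) -> str:
    return "shortlist-ready" if intent == "hire" else "route-to-support"

def support_agent(message: str) -> str:
    """The intelligence layer: pure composition, no hidden capabilities."""
    intent = CAPABILITIES["extract_intent"](message)
    return CAPABILITIES["match_talent"](intent)
```

The next agent that needs intent extraction calls the same registered primitive instead of re-embedding it in a 4,000-line prompt — which is the whole difference between composable and monolithic.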
Interfaces are delivery, not value
Layer four is interfaces. Dashboards, UX, platforms. The Block authors are blunt about this: interfaces are how value is delivered, not where value is created. A beautiful dashboard with no underlying intelligence layer is a slide deck. A scrappy CLI on top of a deep capability stack is a product.
I think Spanish enterprise has gotten this layer wrong for two decades, optimizing for slick interfaces on top of thin systems. The AI-native version of this mistake is worse because the interface can hide much more. A polished chatbot UI on top of a wrapper over a third-party API looks like a product. It is not.
The test the framework proposes is: if you remove the interface, what remains? If the answer is "nothing, the value was in the chat window", you have built a brand. If the answer is "the underlying capabilities still serve other surfaces", you have built infrastructure. Infrastructure compounds. Brands don't.
Three roles, three time horizons
Block organizes teams around three roles, and this is the part I have spent the most time chewing on for Shakers.
ICs build and operate primitives. They own a capability and they keep it healthy. They are not on a project. They are not "shipping a feature this quarter". They own a piece of infrastructure indefinitely. This sounds like a research scientist role; it isn't. It is closer to an SRE — operator-with-deep-knowledge.
DRIs are the directly responsible individuals for cross-team outcomes. They hold authority for 90 days at a time. They span teams. They can override roadmaps inside their window if they need to assemble capabilities differently. The 90-day cap is the part I find smart: it prevents DRI authority from calcifying into permanent middle management.
Player-Coaches build and develop people. They are the only role with explicit permission to spend half their time on mentorship; the rest of the org expects that ratio from them, and the org structure budgets for it.
The shape this gives a team is unusual. There is no "engineering manager" in the traditional sense. There is an IC who owns a capability, a rotating DRI who composes capabilities into a product outcome, and a Player-Coach who is responsible for the growth of the next IC and the next DRI. It is closer to a sports team than an org chart.
Three things I keep coming back to
I would not copy Block's framework wholesale. A marketplace context introduces constraints that a payments business does not have, and frameworks travel badly across organizational shape. But three pieces of it keep working in my head.
Capabilities before vendors. The discipline of naming, contracting, and writing a baseline eval before a model integration is approved sounds obvious and is almost never how it goes in practice. If a team cannot write the eval, they do not have a capability — they have a wish. Most AI roadmaps are made of wishes presented as capabilities.
Honest world model. The version of business reality that the AI operates on is rarely the same as the version the marketing CRM shows. The gap between the two is where most production AI fails silently. Naming this gap explicitly — and forcing the AI to query the honest version — is more important than picking the right model.
The 90-day DRI rotation. A directly-responsible individual with cross-team authority capped at 90 days is the part of the framework I find most counterintuitive and most worth thinking about. The cap is what prevents authority from concentrating into permanent middle management. The risk is that 90 days is too short for complex initiatives. The upside is clean handoffs.
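The "write the eval before the integration is approved" discipline from the first point can also be sketched: a baseline eval that any candidate backend must clear before it ships. The cases, threshold, and stub backend below are invented for illustration.

```python
# A capability does not exist until its eval does. The eval is the contract's teeth.
BASELINE_EVAL = [
    # (job description, candidate that must appear in the results)
    ("senior backend engineer, Go, payments", "cand-go-1"),
    ("ML engineer, recommender systems", "cand-ml-2"),
    ("staff frontend, React, design systems", "cand-fe-3"),
]
QUALITY_BAR = 2 / 3  # minimum fraction of cases a backend must pass to be approved

def approve_backend(backend) -> bool:
    """Gate any vendor/model integration on the baseline eval, not on a demo."""
    passed = sum(1 for job, expected in BASELINE_EVAL if expected in backend(job))
    return passed / len(BASELINE_EVAL) >= QUALITY_BAR

def candidate_backend(job: str) -> list[str]:
    # Stub standing in for a real model call; misses the third case.
    table = {
        "senior backend engineer, Go, payments": ["cand-go-1", "cand-go-7"],
        "ML engineer, recommender systems": ["cand-ml-2"],
        "staff frontend, React, design systems": ["cand-fe-9"],
    }
    return table.get(job, [])
```

A team that cannot fill in `BASELINE_EVAL` for their proposed capability has, in the framework's terms, a wish — and the gate makes that visible before the vendor contract is signed.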
Closing
The Block framework is good not because it is novel — most of these ideas live in books from 2010 — but because it forces a specific order of operations. Capabilities, then world model, then intelligence layer, then interfaces. Not the other way around. Most companies build inverted: they ship an interface, monkey-patch the intelligence to make it work, run on whatever world model the previous CRM left behind, and treat capabilities as an afterthought.
That order produces software that demos well and rots fast. The Block order produces software that demos slowly and compounds. In an AI cycle where leverage is everything, I know which one I would rather be building.