A few weeks ago we hosted ProductTank Madrid at our offices. The topic was an obvious one — what changes when AI enters a product team — and the conversation went somewhere most product talks don't. I want to write down what stayed with me.
The premise was practical: how is product collaboration changing now that AI sits inside the toolchain? The honest answer, from people actively building, was less optimistic than the conference circuit version. Speed is real. Alignment is harder than ever. The two are not the same problem, and conflating them is producing a generation of teams that ship more and understand less.
Compression of the ideation-to-prototype cycle
Bruno Machado from Miro showed what their product team has been doing internally. Brainstorming sessions used to end with a wall of sticky notes that someone would have to type up, sort, and turn into a brief. Now the brainstorming session ends with a functional prototype. Same room, same people, same hour — but the artifact at the end is something you can click on instead of something you can read.
The technical pipeline is straightforward and that's part of why it works. Miro reads the canvas; an LLM converts the visual structure into a component description; a code generator produces a working React prototype. The handoff is internal. The friction that used to live between "we had an idea" and "we can test the idea" has collapsed into the same conversation.
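To make the shape of that pipeline concrete, here is a minimal sketch in TypeScript. Bruno didn't share implementation details, so every name below (readCanvas, describeComponents, callLLM, and the two types) is a placeholder for illustration, not Miro's actual API; the only point is that the three hops compose into a single call chain.

```typescript
// Sketch only: these names are placeholders, not Miro's internal API.

interface CanvasNode {
  id: string;
  type: "sticky" | "shape" | "frame" | "connector";
  text: string;
  children: CanvasNode[];
}

interface ComponentSpec {
  name: string;                   // e.g. "OnboardingChecklist"
  props: Record<string, string>;  // prop name -> type hint
  description: string;            // intent pulled from the stickies
}

// Hypothetical clients for the board and for whatever model is in use.
declare function readCanvas(boardId: string): Promise<CanvasNode[]>;
declare function callLLM(prompt: string): Promise<string>;

// Step 2: an LLM turns the visual structure into component descriptions.
async function describeComponents(nodes: CanvasNode[]): Promise<ComponentSpec[]> {
  const prompt = [
    "Convert this whiteboard structure into a list of UI components.",
    'Reply with JSON: [{ "name", "props", "description" }].',
    JSON.stringify(nodes, null, 2),
  ].join("\n\n");
  return JSON.parse(await callLLM(prompt)) as ComponentSpec[];
}

// Step 3: a code generator emits one React component per spec.
async function generatePrototype(specs: ComponentSpec[]): Promise<string> {
  const files = await Promise.all(
    specs.map((spec) =>
      callLLM(`Write a self-contained React function component for: ${JSON.stringify(spec)}`),
    ),
  );
  return files.join("\n\n");
}

// The whole flow is one call chain, which is why the handoff friction collapsed.
export async function canvasToPrototype(boardId: string): Promise<string> {
  const specs = await describeComponents(await readCanvas(boardId));
  return generatePrototype(specs);
}
```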
The catch — and Bruno was honest about it — is that prototypes generated this fast can lie convincingly. A prototype that looks polished looks like it's been thought through. That's the deception this kind of speed introduces. The artifact is high-fidelity; the thinking behind it can be anywhere from rigorous to nonexistent. Telling the difference is now part of the product manager's job in a way it wasn't five years ago.
Federico Casabianca at Eduki extended the same thread. They wired Miro to Cursor through MCPs, so that an idea sketched on the canvas can become real code in the same flow. The boundary between design and development has been operationally erased for some kinds of work. A designer can ship a feature. A product manager can write the first cut of the implementation. The tools allow it; the team conventions are catching up.
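For readers who haven't wired an MCP (Model Context Protocol) server into Cursor, the configuration side is small: a project-level .cursor/mcp.json telling Cursor which server to launch and how. The snippet below shows the general shape of that file; the "miro-mcp-server" package name and the token variable are placeholders, since the panel didn't name the specific server Eduki runs.

```json
{
  "mcpServers": {
    "miro": {
      "command": "npx",
      "args": ["-y", "miro-mcp-server"],
      "env": { "MIRO_ACCESS_TOKEN": "<placeholder>" }
    }
  }
}
```

Once a server like this is registered, the canvas becomes context the editor can read and act on, which is what makes the designer-ships-a-feature scenario operationally possible.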
I have mixed feelings about this, and they came up in the conversation. The collapse is real and useful. It also demands a different operational discipline from the one we built up over the last twenty years of product development, and the people about to live inside it haven't fully realized how much of their previous job depended on the gaps that just disappeared.
The counterpoint: performative alignment
Manu Abuín brought what was, for me, the most important word of the night: performative alignment.
The phenomenon: a team produces an artifact that looks polished and aligned — a deck, a spec, a prototype, a Notion doc — and everyone signs off because it looks like the work has been done. But the artifact is generated, not authored. The act of producing it didn't force the team to argue, debate, converge on a definition, or surface the disagreements. The artifact is wrapping paper, and inside there's nothing.
Performative alignment looks identical to real alignment from the outside, which is why it's dangerous. A team can spend months producing convincing documents and shipping plausible features without ever having had the conversation that would surface the disagreement at the core. When the disagreement finally hits — usually in a customer call, or a launch postmortem — it presents as a sudden crisis. It wasn't sudden. It was buried under a stack of well-formatted PDFs.
Manu's line stuck with me: critical thinking is going to be the scarcest capacity in this transition. I'd add a corollary: the teams that don't deliberately design for it will lose it without noticing. Because the tools are doing the visible work — the writing, the structuring, the formatting — the team's instinct to question, push back, and demand definitions will atrophy unless the team makes a point of preserving it.
When you remove a friction, ask what work it was doing
AI multiplies a team's capacity, but only if the team knows where it is going. Technology amplifies what is already there. If the alignment is real, AI compounds it. If the alignment is performative, AI produces more performative artifacts, faster.
The operational pattern that keeps repeating across the AI rollouts I have observed is this: a team removes a friction — drafting an email, summarizing a conversation, generating a brief — and the local metrics improve. Throughput goes up, response rates go up, time-to-completion drops. Three months in, a second-order metric quietly degrades. Relationship depth. Decision quality. Customer satisfaction on the dimension nobody was tracking.
The diagnosis, when it comes, is almost always the same. The friction that was removed had been doing a job nobody had named. It forced a moment of thinking before the action. When the friction left, the thinking left with it, and the workflow optimized for the metric being measured while damaging the metric that actually mattered.
The fix is rarely to remove the AI. It is to redesign the workflow around it — to reintroduce the cognitive step the friction used to enforce, this time intentionally. The metrics return to where they should be. The lesson is the one every product team should internalize before the next rollout: when you remove a friction, ask what work the friction was doing. Sometimes the friction is overhead. Sometimes the friction is the work.
What I'm watching
Three things I'm paying attention to in product teams that have integrated AI seriously over the last twelve months.
Decision logs. The teams that keep a tight log of decisions — the ones that explicitly name what was decided, by whom, against which alternative — are the ones that don't lose alignment when generation accelerates. The artifact production gets faster; the decision-making rate has to stay deliberate, and the only way to know it has is to have a log to look at.
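For teams that want to start one, here is the minimal shape I have in mind, written as a TypeScript type so the fields are explicit; the field names and the example entry are mine, not something any of the panelists prescribed.

```typescript
// Illustrative only: the field names are mine, not a standard the panel endorsed.
// The shape mirrors the prose above: what was decided, by whom, against what.

interface DecisionLogEntry {
  date: string;                    // ISO date the decision was made
  decision: string;                // what was decided, in one sentence
  decidedBy: string;               // a named owner, not "the team"
  alternativesRejected: string[];  // what it was decided against
}

// A hypothetical entry, at the level of specificity that keeps the log useful.
const entry: DecisionLogEntry = {
  date: "2025-03-14",
  decision: "The onboarding checklist ships behind a flag to 10% of new workspaces.",
  decidedBy: "Growth PM",
  alternativesRejected: [
    "Ship to 100% immediately",
    "Hold until the activation metric is redefined",
  ],
};
```

If alternativesRejected is empty, the entry is probably recording an artifact that got approved, not a decision that got made.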
Conflict tolerance. When the cost of producing a draft drops to near zero, the cost of disagreeing with the draft has to come down too. Otherwise the path of least resistance is to approve the generated output and move on. The teams that build a culture where it's normal to push back on AI-generated artifacts — not as a process gate but as a habit of mind — preserve the critical thinking Manu was talking about.
Friction inventories. Periodically I have teams list the frictions they've removed over the last quarter and label each one as overhead or work. The exercise is uncomfortable. Most teams find that 20–30% of the frictions they were proud of removing were doing work they hadn't accounted for, and the work is now missing.
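To make the exercise concrete, here is the shape of an inventory, with invented entries drawn from the examples earlier in this post; the labels are the team's call to make, not a rubric.

```
Friction removed                        | Overhead or work?
----------------------------------------|--------------------------------------------------
Typing up stickies after brainstorms    | overhead
Writing the first draft of the brief    | work (drafting forced the problem statement)
Summarizing customer calls              | work (the summary was where patterns got noticed)
```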
The thing nobody said
There was a question from the floor I keep thinking about. Someone asked, roughly: how do you tell if your team is using AI well?
The answers from the panel were good: measure shipping velocity, look at the quality bar, check for regressions in customer outcomes. All true. The one I'd add: look at the meeting transcripts.
In a team using AI well, the meetings get harder, not easier. The questions get sharper. People come in with more material to discuss because they've had AI help them prepare, and the meeting itself becomes the place where the human judgment compounds. The artifacts produced before the meeting absorb the busywork; the meeting absorbs the thinking.
In a team using AI poorly, the meetings get smoother. Everyone has a draft of the same document, generated by the same tool, expressing roughly the same position. The conversation is brief because there's not much to disagree about. The room feels productive. Three months later, nobody can remember what was decided, and the implementation diverges from the deck because nobody actually owned the underlying decision.
The transcripts are the tell. Look at yours.
Closing
Speed is the easy story. It's the one the conferences want and the one the vendors sell. Alignment is the harder story, and it's where the next phase of differentiation lives. AI gives you more time, more output, more options. What it doesn't give you is judgment about which of those outputs are worth shipping. That part stays with the team.
Thanks to Alex Swiec and the Miro team for organizing, to Bruno, Federico, and Manu for a conversation that was honest in a way panels usually aren't, and to everyone who stayed for the conversation after the talks. ProductTank Madrid is one of the few rooms in this city where the question being asked is the right one. See you at the next one.