Speed Is Not the Goal. The Outcome Is.
When I sit down with an ops leader in the middle of a turnaround or a market pivot, I don’t start with the org chart or the tech stack. I start with friction. Where does the work slow down? Where does the handoff break? Where does your team spend energy that doesn’t move anything forward?
That question usually opens up fast, because ops leaders live in friction. They know exactly where it is. What takes longer is the next question: what does the optimal state actually look like? Not the fixed version of what exists — the version designed around the outcome you’re trying to achieve, for your customer and for your team.
That distinction is where most transformations go sideways. And while I’m going to use AI as the primary example here, the principle is older and broader than any single tool. It applies to any significant work redesign, any process overhaul, any operational transformation. The tool is secondary. The question is everything.
The Gap Between Activity and Outcome
Here is what the current moment looks like from the outside: AI adoption is nearly universal, results are uneven, and almost everyone is confused about why.
McKinsey’s most recent workplace AI survey found that while roughly 80 percent of companies are now using generative AI, only 1 percent of C-suite respondents describe their implementations as “mature” — meaning AI is fundamentally changing how work is done and driving substantial business outcomes. Everyone else is somewhere in the middle: running pilots, seeing efficiency gains in pockets, watching the demo look good while the needle on business performance stays stubbornly still.
Deloitte’s 2026 State of AI in the Enterprise report adds another layer. Two-thirds of organizations report productivity and efficiency gains from AI — real gains, not imaginary. But only 20 percent say they’ve seen revenue growth, while 74 percent say revenue growth is exactly what they were hoping for. The gains are real. They’re just not the gains anyone was after.
Automating a broken process doesn’t fix it — it just executes the broken logic faster and at greater scale.
Deloitte’s research also surfaces the segmentation clearly. A third of organizations are using AI at a surface level with little or no change to underlying processes. Another third are redesigning key processes around AI. Only the final third are deeply transforming — reinventing core processes or building new business models entirely. All three groups are capturing efficiency gains. Only the last group is reimagining the business.
This is the pattern: efficiency is accessible. Transformation requires something harder. And the difference almost always comes down to whether you started with the outcome question or with the tool.
What Netflix Understood Twice
Netflix understood this in a way that most companies still don’t, and they understood it more than once. When Reed Hastings and Marc Randolph built the DVD-by-mail model, they didn’t ask how to make video rental faster. They asked what the customer actually wanted — broad selection, no late fees, no trip to the store — and designed the entire operation backward from that answer. The process that emerged looked nothing like a video rental store. That wasn’t accidental. It was the point.
Then they did it again. When streaming became viable, they didn’t add it as a feature on top of the existing model. They asked the outcome question fresh: what does “access to content” mean when bandwidth removes physical constraints? The answer required them to cannibalize a business that was working. That’s a harder call than it sounds when you’re in the middle of it.
Each transformation required them to resist the pull of their own operational success. The thing that made the previous model run well was exactly what they had to be willing to set aside when the outcome question pointed somewhere else.
Microsoft has articulated the same discipline from the inside out. In documenting their continuous improvement work, their leaders have been explicit that sequence matters: fix the process first, then apply AI. Their own framing is direct — running continuous improvement before AI deployment “keeps you from automating a broken process and focusing AI’s abilities in the wrong direction.” That’s not a caution about the technology. It’s a statement about what question has to come first.
The Practical Tension
For ops leaders, this is where it gets hard. You are paid to make the current system perform. Your credibility is built on execution. And now you’re being asked to question whether the system itself is the right design — while keeping it running, while managing a team that is already stretched, while reporting progress to leadership that wants visible wins.
There’s a structural pressure compounding this. PwC’s research on AI transformation observes that the organizations seeing the strongest results run AI as a top-down strategic program: senior leadership picks the specific workflows where investment will have outsized payoff, then applies the full weight of talent, technical resources, and change management behind those bets. The organizations that struggle tend to crowdsource their initiatives, letting adoption accumulate without strategic coherence. The result is impressive adoption numbers attached to modest business outcomes. The activity is real; the outcomes lag because no one asked the hard question before the projects started.
PwC also notes that nearly half of executives say translating AI principles into operational processes has been a genuine challenge. It’s easy to read that as a skills problem or a change management failure. More often, it’s a design problem. The principles were sound. The processes they were layered onto were built for a different purpose — and the seams show.
The way through that tension isn’t to ignore the efficiency work. Tactical improvements buy time and build trust. But they need to run in parallel with a harder conversation: are we designing toward the right outcome, or are we getting better at the wrong thing?
Designing Toward the Right Outcome
That conversation starts with your customer and works backward. What are they actually trying to accomplish? Where does your current process serve that, and where does it get in the way? What would you build if you were starting from the outcome rather than inheriting the process?
This is the discipline that separates the organizations in Deloitte’s top third — the ones genuinely reimagining their businesses — from the larger group executing AI deployments against existing process maps. The tools available to both groups are often identical. The starting question is different.
Any tool — AI or otherwise — belongs in that conversation as a design input, not a starting point. The question isn’t where can we apply this capability to what we already do. It’s what becomes possible if we redesign the work with this capability as something native to it rather than layered on top.
That’s the difference between optimization and transformation. Optimization improves the system you have. Transformation starts with the outcome you’re trying to deliver and builds a system that serves it. In a turnaround, only one of them changes your position.
BEARING CHECK
When you look at where your team is spending the most energy right now — are those efforts designed around the outcome you’re trying to deliver, or around the process you inherited? And if you had to draw a clear line between the two, how wide would the gap be?