‘Speed is intoxicating’ - and that’s a problem
This week I attended Atlanta AI Week, a few days of conversations about where AI is taking us and what it demands of the people leading through it. Stephen Gates, one of the keynote speakers, gave an engaging talk with several takeaways. One was "Speed can be intoxicating."
He wasn’t warning against moving fast. He was warning against what happens when speed becomes the point — when momentum gets mistaken for progress, when output gets mistaken for value. It’s a trap that’s been sprung before. And AI is about to spring it again, at a scale we haven’t seen.
The unsettling part is that this isn't new thinking. The people who pioneered the frameworks we now mangle — Ries, McChrystal, Sutherland — all said the same thing in different contexts: speed is not the goal. It never was. And if you're leading an operations team in an AI-saturated environment, that distinction has never mattered more.
Three Frameworks. One Misreading.
Jeff Sutherland, co-creator of Scrum, wrote the framework's signature book with a title that did its job almost too well. "Doing Twice the Work in Half the Time" was catchy, and it drew exactly the audience it was meant to draw. But for the many leaders who absorbed the title without reading the book, it planted a seed that has been growing in the wrong direction ever since.
Agile was never about delivering faster. It was about learning faster — surfacing what’s working and what isn’t before you’ve sunk a year of effort into the wrong direction. The sprint isn’t a race to the finish line. It’s a structured experiment with a built-in review. The speed serves the feedback loop. Without the loop, you’re just sprinting.
Eric Ries made the same argument in The Lean Startup. "Fail fast" was never an instruction to ship slop and shrug. It was a disciplined approach to testing assumptions cheaply, so that when something fails — and something always does — you fail small and learn fast. The build-measure-learn loop requires a human hypothesis at the start: what do we believe, and what would prove us wrong? Take out the learning and you haven't simplified the framework. You've hollowed it out.
Stanley McChrystal’s argument in Team of Teams is different in kind but identical in spirit. The speed he pursued wasn’t the speed of shipping — it was the speed of decision-making at the edges of a complex organization. He rebuilt the Joint Special Operations Command not by adding more control, but by distributing trust. Decisions could happen faster because the people closest to the situation had the context, the authority, and the shared understanding of the mission to act without waiting for approval to travel up the chain. That’s a structural argument. It requires intentional investment in alignment. You can’t shortcut your way to it.
Three different domains. Three different books. The same principle underneath all of them: speed is a byproduct of doing the harder work well — not a substitute for doing it.
What AI Makes Worse
Companies large and small have fallen into the same trap: absorbing the language of agile, lean, and distributed leadership without doing the underlying work those frameworks require. A common vocabulary helps. But at some point you have to do the work — and for leaders, that means rolling up your sleeves, getting close enough to understand what's actually happening, and giving your team the information they need to operate in alignment with your vision.
AI doesn’t fix that gap. It widens it.
The speaker at that conference — Stephen Gates — named what AI produces without human judgment: "plausible noise." The phrase is apt. Plausible noise is confident. It's structured. It sounds right. And it's empty — because it was generated without the context that would make it true, the judgment that would make it useful, or the intentionality that would make it connected to anything that matters.
I’ve worked with many leaders who used Scrum as a shortcut to push output — to produce faster without thinking critically, to generate output without tying it to value or outcomes. The same thing is now happening with AI, at a much larger scale. AI can churn out work at a speed that would have been unimaginable five years ago. So what? Speed without context is just a faster way to produce the wrong thing. And plausible noise at scale is more dangerous than no output at all, because it’s harder to see what’s missing.
The Environment That Makes Speed Safe and Effective
Here’s what I keep coming back to: every one of these frameworks — agile, lean, McChrystal’s distributed command — requires the same foundational conditions to work. Not tools. Not velocity. Conditions.
Alignment on what matters and why. Without it, fast decisions are fast in the wrong direction. Teams can sprint with extraordinary discipline toward an outcome nobody actually wanted.
Psychological safety to surface what's not working. Ries's learning loop collapses without it. If people can't admit failure or flag a bad assumption without consequence, you don't have a feedback loop. You have a performance.
Trust that flows in both directions. McChrystal’s speed at the edges only works when leadership trusts the team with information and authority, and the team trusts that leadership has given them the right mission. That trust doesn’t come from an org chart revision. It’s built.
The same conditions determine whether AI tools accelerate real work or just accelerate noise. Judgment, context, and intentionality aren’t soft competencies to develop after you’ve deployed the tools. They’re the prerequisites. They determine whether the speed you gain is worth anything.
Leaders who are getting this right are treating AI the way the best agile teams treat their sprints: as a structure for faster learning, not faster output. They're asking what the tool surfaced, not just what it produced. They're reviewing AI-generated work with the same rigor they'd bring to any high-stakes recommendation. They're using the speed AI provides to make more room for the human judgment AI can't supply.
That’s not a technology strategy. It’s a leadership strategy. And it requires the same things good operations leadership has always required: being close enough to the work to know what’s real, and deliberate enough to build the conditions where your team can perform.
Bearing Check
Think about how your team is using speed right now — in decision-making, in delivery, in adopting new tools. Is that speed producing learning and better outcomes? Or is it producing activity? And if you’re honest about the difference, what would have to change?