The Quiet Shift in Enterprise AI Buying
Six months ago every first call I took started with the same sentence. Some version of "help us figure out where AI fits." The buyer had budget and a steering committee and three vendors lined up for bake-offs, and what they wanted was a partner to help them think out loud about strategy. Those were good engagements to be on. They were also, in retrospect, the easy ones.
That conversation has changed. The first call I took this week opened with "we have eleven AI projects, none of them are in production, and we need to know which to kill and which to actually ship." The budget for next year is contingent on figuring that out before the board meeting. The buyer didn't want strategy. The buyer wanted a knife.
That is a totally different sale. Most of the consulting firms I know are not built for it.
What changed in the buyer's seat
The first generation of enterprise AI engagements was exploratory by design. A budget got allocated, a working group was convened, three or four POCs were scoped, and the firms that won the work were the firms that could most credibly help the buyer feel like they were on top of a moving thing. The success metric was usually "we now have a point of view on AI." Most programs hit that bar. Some of them produced a chatbot. Almost none of them produced anything that ended up in a quarterly report.
Now the bill is coming due. The CIO who allocated $4M last year to "explore generative AI" has nothing to show the CFO except a slide deck and a cloud spend nobody wants to defend. The board is asking what shipped. The honest answer is, mostly, demos. The buyer needs the next conversation to look very different from the last one.
What that conversation actually wants
It wants triage, not strategy. The buyer already knows where they want AI to fit. They have eleven projects that all looked good in committee, none of which made it into the hands of a real user. What they need is someone who can sit in front of those eleven projects and say, with conviction, which two are real, which six were never going to ship under any leadership, and which three need to be rebuilt from scratch with a different problem statement. Then they need someone who can actually take the two real ones over the line.
That work is not exploratory. It is editorial and operational. It rewards opinions, fast scoping, and a delivery team that has actually shipped AI features into production for a paying business. It punishes consultants who want to do another readiness assessment.
Why this is harder for most firms
The big consultancies built their AI practices on the first conversation. The pricing model assumes a long discovery, a working-group cadence, and a slide-heavy quarterly review. The bench is mostly strategists who can run a workshop and analysts who can build a maturity model.
That bench can't ship. The buyer in the second conversation is not interested in another workshop. They've had the workshop. They have the deck. They have the maturity model on a shelf. What they don't have is anything in production.
The firms that win the second conversation are smaller, more opinionated, and built around engineering teams that have actually put generative AI features in front of users and watched them fail and watched them work. That is not where most of the industry's AI revenue currently sits. It is where that revenue is headed.
Three things to watch for in the pipeline
If this matches what you're seeing, three signals are worth tracking deliberately:
- More inbound, narrower briefs. Buyers aren't asking for capability decks anymore. They're describing a specific stuck project and asking if you've unstuck one like it.
- Shorter sales cycles. Diagnostic work closes faster than discovery work, because the budget is already allocated and the political pressure is already there.
- Sharper references. "We unstuck a stalled enterprise AI program" lands harder in a first meeting than "we helped them think about AI." If your case studies still read as the second one, they're aging out.
What I'm doing about it
I'm scoping faster and saying no more often. The buyer who needs the second conversation cannot afford a six-week scoping exercise. They need a yes or no on what's worth saving inside a week, and a delivery plan inside two. If I can't do that, somebody else will. This work looks less like consulting and more like an emergency room visit.
I'm also being more honest about the eleven projects. Most of them deserve to die. Telling the buyer that, in the first call, has been a better way to win the work than any framework I've ever pitched. The buyer knows. They want a partner who'll say it.
What's coming
The next twelve months sort the AI consulting market into two camps. One camp keeps selling the first conversation, because that is what its bench is built for, and the pool of buyers still having that conversation is shrinking fast. The other camp learns to sell the second one, which means staffing engineers instead of strategists and being willing to name which projects deserve to die.
The first camp is going to be worth less by the end of next year. The second is going to look a lot like what enterprise AI consulting actually is, once the noise dies down.
Worth thinking about which one your firm is set up to be in.