AI washing is a buyer signal
When a client blames AI for cuts that have nothing to do with AI, that's not transformation. That's theater. And it tells you exactly how the engagement is going to go.
Sam Altman just said out loud what anyone running enterprise AI engagements has been watching for a year. Companies are blaming layoffs on AI when AI had nothing to do with them. He framed it as a temporary thing, a head fake before real displacement shows up in the data. I read it differently. I read it as a tell.
When a client cites AI as the reason for cuts before they've actually deployed anything that works, that's not transformation. That's a press release looking for a justification. And once you learn to spot it, it changes how you scope the work.
What AI washing actually looks like in the room
It looks like a CFO who's trimming 8% of headcount for reasons that pre-date ChatGPT, and a comms team rewriting the announcement to mention productivity gains from generative AI. It looks like a steering committee that has eleven pilots, zero in production, and a board deck claiming AI-driven efficiency. It looks like a request for proposal that opens with ambitious language about agentic workflows and closes with a scope that's basically a staff reduction plan with a chatbot stapled on.
None of that is AI work. It's cost-cutting wearing AI as a costume. The economists Altman is responding to are right on the data. There isn't a real signal of AI displacement in the labor numbers yet. What there is, is a convenient narrative.
Why this matters for the engagement
If a client is using AI as the public reason for something AI didn't cause, you can predict the rest of the engagement with uncomfortable accuracy.
The budget will be smaller than the ambition. The success metric will be soft, because no one actually expects measurable productivity. The internal sponsor will go quiet around month four, because the cuts already happened and the political work is done. And when your team ships something real, there's no operating model ready to absorb it. The org was reshaped around a story, not a capability.
I've seen this pattern enough times to call it. The tell is usually in the first scoping call. If the buyer can't name a specific workflow, a specific user, and a specific before-and-after, but can name a headcount target, you're not being hired to build. You're being hired to validate.
What I'm doing about it
I ask earlier, more directly. "What changes for the people still in their seats after this ships?" If the answer is fuzzy, the engagement is fuzzy. If the answer is sharp, even when it's painful, the work tends to land.
I also push harder on a written success definition before signing. Not a slide. A paragraph the sponsor will repeat to their boss in six months. AI-washed programs can't write that paragraph because the actual goal isn't something anyone wants on the record.
And I've gotten more comfortable walking. Not every deal is the right deal. A client who wants AI as a cover story will eventually need someone to blame when the cover story falls apart. That someone is usually the consultant.
The part most people miss
Altman's admission isn't really about CEOs being dishonest. It's about how fast the narrative ran ahead of the technology. The labor data hasn't moved yet. The deployments haven't matured yet. But the org charts have already been redrawn on the assumption that both will, soon. That gap between story and substance is where bad engagements live.
My read is that the next twelve months sort buyers into two camps. One camp is going to quietly rehire some of the roles they cut, because the AI didn't actually do the job. The other is going to do the harder work of redesigning how the job gets done, with software in the loop and humans where they need to be. The second camp is who I want to be in a room with. The first camp will be writing a different kind of press release by this time next year.
Worth a beat before you take that next meeting.