Two-Thirds of Doctors, Zero Procurement Trail
Most U.S. physicians now log into a medical AI tool their hospitals never bought. The reckoning isn't here yet. Bentonville just showed us its shape.
The consensus read of the NBC piece is roughly: isn't it interesting that doctors love this AI tool. Nearly two-thirds of U.S. physicians now log into OpenEvidence, the article says, comparing the platform's reach to UpToDate and noting that clinicians authenticate with their NPI numbers. The framing is adoption-as-progress. Quiet, organic, bottom-up — the way real tools always spread.
That read misses the actual story by one layer.
The number itself deserves a discount before we build anything on it. "Two-thirds of physicians" almost certainly means accounts created against NPI numbers, not verified active clinical use. OpenEvidence has every incentive to report the largest plausible figure to the largest plausible audience, and a registration-based numerator is how you get there. NBC reproduced the figure without flagging this. Treat the directional signal as real and the precision as marketing.
Even heavily discounted, the directional signal is the story. A material share of U.S. physicians are consulting an AI tool during clinical work, authenticated by their professional identifier, and their employers — the hospitals and health systems carrying the malpractice exposure — did not buy the product, do not have a contract with the vendor, and in most cases do not know which of their clinicians are using it for what.
That is not adoption. That is shadow IT at clinical scale.
The upstream phase
The interesting thing about the OpenEvidence story is not that it's happening. Shadow tooling always happens. The interesting thing is that healthcare is still in what I'd call the upstream phase — the period after adoption has occurred but before the institution has reckoned with it. No procurement review. No data-flow audit. No clarity on whether a clinician pasting a patient summary into a third-party LLM has just created a HIPAA event. No policy on whether an AI-assisted differential that turns out wrong is the clinician's judgment call or the tool's failure. None of the apparatus that would exist if the CIO had signed the contract.
The upstream phase feels calm because the consequences haven't arrived. That's exactly what makes it dangerous. Every month of unmanaged adoption is another month of accumulating exposure that no one is measuring, because the institution doesn't know it's there to measure.
What Bentonville just did
Walmart is downstream. The WSJ reporting frames this week's news as a 1,000-person layoff, which is how most outlets are covering it, and which obscures the structural signal underneath. The org change is the story: Walmart is merging its global-tech function with its AI product function, and the redundancies are what fall out when two previously separate orgs become one.
The layoff framing makes this a cost story. It's not. It's an admission that the previous org chart — built when AI was a product team inside a tech org — no longer matches how the work actually flows. Adoption ate the boundary between the two functions, and the company is restructuring to match a reality it didn't design. Reactive, not proactive. The humans in the overlap zone are the cost of the institution catching up to itself.
That's what the downstream of shadow adoption looks like when it's visible and measurable. Retail can do this in one news cycle because the consequence is an org-chart change and a severance line. The CEO can sign off and the WSJ can cover it.
Healthcare's downstream will not look like that.
What healthcare's reckoning will actually be
The first malpractice case that turns on AI-assisted clinical reasoning is going to force the question that hospital general counsel offices are currently not asking out loud: did our clinician use an AI tool, which one, was it sanctioned, and what is our liability for a piece of software we never bought? I don't know whether that first case names the tool, the clinician, or the hospital that didn't know its clinicians were using one. That gap matters, because it determines who pays for the upstream phase.
My guess is the hospital. Not because the clinician was reckless — most of them are using these tools the way they used UpToDate, as a reference, with judgment layered on top — but because the institution is the deep pocket and the institution failed to govern. "We didn't know" is not a defense; it is the allegation.
When that case lands, healthcare gets its Walmart moment. Except instead of an org-chart merger, it's a scramble to retrofit governance onto adoption that's already two or three years deep. Procurement reviews on tools already woven into clinical workflow. Data-flow audits on inputs that have already left the building. Policy frameworks written under deposition pressure rather than at leisure.
What healthcare IT could still do
The useful move, right now, in the upstream phase, is to assume the adoption number is real and start the work that would exist if the institution had bought the tool. Sanction the use or prohibit it; both are governance, and indifference is not. Audit what clinicians are pasting where. Get a contract in place with the vendors that matter so the data handling is documented. Decide who carries the liability and put it in writing.
None of this is fun work. All of it is cheaper than doing it on a litigation timeline. The Walmart restructuring tells you what reactive governance costs in retail. The healthcare version, when it arrives, will be priced in settlements and consent decrees, not severance packages.
The two-thirds figure is probably softer than it sounds. The exposure underneath it is harder than it looks.