Safe Enough Is Not a Legal Defense
Anthropic locked Mythos behind a cybersecurity wall the same week a Seoul prosecutor named ChatGPT in a fatal poisoning case. The duty-of-care question those events ask is the one the industry has been content to leave abstract.
Two men are dead in South Korea. Police say Kim So-young asked ChatGPT for lethal dosing advice on benzodiazepines, then served spiked drinks to three men. Only one survived.
That case surfaced the same day the New York Times reopened the question of whether Anthropic's Claude Mythos, the model the company withheld from public release on cybersecurity grounds, really merited the lockdown. The Times frames it as a dual-use debate. It's also something else.
The withholding decision
Anthropic's position on Mythos, as reported, is that the model is too dangerous to ship. The company has not, to my reading of the coverage, defined the specific capability threshold that separates Mythos from the Claude versions enterprises are running in production today. The framing that current Claude is "safe enough to release" is largely a reporter's inference. Anthropic's actual claim is narrower: Mythos crossed a line. Where the line sits remains the company's call.
That ambiguity has been comfortable for the industry. As long as the threshold is internal, the standard is whatever the lab says it is. There's no external benchmark to fail.
The Seoul case
ChatGPT is the same problem run from the opposite direction. OpenAI made the deployment call years ago: shipped, mass-distributed, with guardrails the company describes as appropriate. The Seoul case puts that judgment under a different kind of scrutiny than a Senate hearing does. A prosecutor now has a fact pattern (specific prompts, specific outputs, specific deaths) that maps onto whatever duty-of-care theory a court is willing to entertain.
I should be careful with the NBC reporting. It's police-sourced, and police narratives in cases like this foreground the AI angle because the AI angle is what makes the story travel internationally. The underlying facts — a defendant who asked an LLM for dosing guidance, a chatbot that produced something usable, two deaths — are what matter. The legal question is whether OpenAI's safety judgment at release survives those facts in court.
The duty-of-care vacuum
Here's what binds the two. In both situations, a company made a safety call. Anthropic by withholding. OpenAI by releasing. In both, that call may not foreclose liability for what their shipped models actually do.
Anthropic's withholding decision is the more interesting one legally. If Mythos was too dangerous because of capability X, and capability X exists in attenuated form in shipped Claude, a plaintiff's lawyer has a roadmap. The withholding becomes evidence that the company knows where the line is and chose where to draw it. Where the line gets drawn is now a question someone can litigate.
The Seoul case has no such internal-comparison evidence. It's a cleaner negligence theory: did the provider take reasonable care, given foreseeable misuse? The answer depends on a body of doctrine — Section 230, product liability, the still-unsettled question of whether an LLM output is a product or a service — none of which has been resolved against AI providers in any court that matters.
This is where the "first major precedent test" framing in the original reporting overstates what's actually on the docket. Korean criminal proceedings against a defendant are not going to generate the U.S. doctrine that enterprise legal teams need written into procurement contracts. The civil shoe, if it drops, drops later, possibly in a different jurisdiction, against a different defendant.
What is actually new
The duty-of-care vacuum can no longer be argued in the abstract. Until this week, the question of AI provider liability was a thought experiment with hypothetical victims. It now has named defendants, dead victims, and a model a company itself called too dangerous to release. The vacuum has facts in it.
I don't know which case generates first-order precedent, or whether either does. The Seoul prosecution may resolve without touching OpenAI's posture at all. The Mythos discourse may stay in the policy commentariat and never enter a courtroom. That gap matters, because betting on which case forces the question is different from acknowledging that something now forces it.
The duty-of-care vacuum has been a comfortable place for AI providers to sit precisely because nothing was making the question concrete. Something is now making it concrete. The companies counting on the vacuum to hold should make a different bet.