The Leaked AI Model That Crashed Cybersecurity Stocks
TL;DR
Anthropic accidentally leaked details of an unreleased model called Claude Mythos: Fortune found roughly 3,000 unauthenticated assets, including draft blog posts and internal docs describing Mythos as a new tier above Opus with dramatically stronger coding, reasoning, and cybersecurity performance.
Anthropic says Mythos is its most capable model yet, and unusually strong at cyber: the company confirmed the model is real, called it a “step change,” and said it is “far ahead of any other AI model in cyber capabilities,” with plans to roll it out first to cyber defense organizations.
The hosts’ real takeaway is not the leak but the capability jump coming soon — they argue these frontier labs are always 6 to 12 months ahead of what the public sees, meaning Mythos and OpenAI’s next model, codenamed Spud, were likely trained months ago and are now in post-training and red-teaming.
Wall Street still got spooked even though this trend was obvious: after the leak, CrowdStrike, Palo Alto Networks, and Zscaler fell about 6%, Okta and Netskope dropped more than 7%, and Tenable plunged 9%, just on the idea of a stronger AI model for cybersecurity.
The episode ties the leak to a broader AGI timeline that is moving faster than expected — using OpenAI’s five internal stages from Bloomberg’s July 2024 reporting, the hosts argue the industry went from level one chatbots to emerging level four “innovators” in about 20 months, with early level five “organizations” now starting to look plausible.
The Breakdown
The Anthropic leak that wasn’t supposed to exist
The episode opens with a Fortune exclusive: Anthropic accidentally exposed around 3,000 unpublished assets through an unsecured CMS, including draft blog posts, internal images, and documents about a secret UK CEO retreat. The big reveal was Claude Mythos, an unreleased model described as a new tier above Opus and much stronger on software coding, academic reasoning, and cybersecurity.
Mythos sounds powerful enough to make people nervous
Anthropic confirmed Mythos is real and called it a “step change” over prior models, its most capable yet. What really lands is the company’s warning that Mythos is already far ahead of other models in cyber capabilities, hinting at a wave of systems that can exploit vulnerabilities faster than defenders can react; that is reportedly why Anthropic wanted to start with cyber defense orgs before a broader release.
OpenAI’s “Spud” adds to the sense that something big is imminent
At the same time, OpenAI says it has finished pre-training its next major model, codenamed Spud, and Sam Altman reportedly told staff a very strong model could arrive within weeks and “really accelerate the economy.” The hosts read that phrasing a little darkly, joking that “accelerate the economy” may not mean “create more jobs,” and frame both announcements as signs that two major model launches may be very close.
Why the hosts keep saying you can’t plan around today’s models
Paul’s main point is that labs are always way ahead of public perception: the models people use now are not the frontier internally. He says companies like Anthropic and OpenAI are likely 6 to 12 months ahead, with these systems probably done training months ago and now just being polished through post-training and red-teaming, which is why “you cannot make plans based on your current experience with these models.”
The human error story — and the agentic future of finding leaks
There’s a very human layer here too: the hosts say they feel for whoever on the marketing or web team allowed this CMS exposure to happen, with one of them bluntly saying he’d imagine someone lost a job over it. But they quickly widen the lens, arguing that agents will make this kind of discovery much easier, since competitors or black-hat actors can just run automated systems 24/7 hunting for exposed assets and vulnerabilities.
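To make that concrete, here is a minimal sketch of what an always-on sweep like that could look like: a few lines of Python that probe a handful of candidate CMS paths and flag anything that comes back without authentication. The base URL and paths are hypothetical placeholders for illustration, not details from the Fortune report; an agent would simply run logic like this continuously across many hosts and many more paths.

```python
# Minimal sketch of an automated "exposed asset" sweep of the kind the hosts describe.
# All URLs and paths here are hypothetical placeholders, not anything from the Fortune report.
import requests

CANDIDATE_PATHS = [
    "/api/assets",                        # generic CMS asset-listing endpoint
    "/wp-json/wp/v2/posts?status=draft",  # draft posts should normally require auth
    "/admin/export.json",
    "/uploads/",
]

def probe(base_url: str, paths: list[str]) -> list[str]:
    """Return the paths that respond 200 with no authentication at all."""
    exposed = []
    for path in paths:
        url = base_url.rstrip("/") + path
        try:
            resp = requests.get(url, timeout=5, allow_redirects=False)
        except requests.RequestException:
            continue  # unreachable or timed out; move on
        if resp.status_code == 200:
            exposed.append(path)
    return exposed

if __name__ == "__main__":
    # Hypothetical target; an agent would loop this over many hosts, around the clock.
    for hit in probe("https://example.com", CANDIDATE_PATHS):
        print("unauthenticated asset:", hit)
```

The script itself is trivial, which is the hosts’ point: the hard part was never the probing, it was having something tireless enough to check everything, all the time.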
Fortune’s restraint is almost as interesting as the leak
One detail they keep circling back to: Fortune reportedly found the exposure, brought in cybersecurity researchers, alerted Anthropic — and then didn’t publish the actual leaked materials beyond broad descriptions. The hosts speculate there may be some media-relations tradeoff or future exclusive involved, because it’s striking that a publication with access to blog posts, documents, and images chose not to dump the details.
Cybersecurity stocks got crushed on a story everyone already saw coming
The market reaction was sharp and, in the hosts’ view, a little absurdly delayed. CrowdStrike, Palo Alto Networks, and Zscaler each dropped about 6%, SentinelOne fell 6%, Okta and Netskope slid more than 7%, and Tenable fell 9%, all because a stronger cyber-capable model might be imminent, even though the hosts note the industry has been predicting this for two years.
From chatbots to level four in 20 months
The episode ends by connecting all this to OpenAI’s internal five-stage framework, first reported by Rachel Metz at Bloomberg in July 2024: level one chatbots, level two reasoners, level three agents, level four innovators, and level five organizations. The hosts argue we’ve already raced from level one to the edge of level four in about 20 months, expect innovators to be obvious by fall, and now think early signs of level five — AI doing the work of an organization — may arrive in some industries sooner than they ever wanted to be talking about.