The Artificial Intelligence Show Podcast · 1h 29m

Ep. 207: OpenAI v. Anthropic Feud, Claude Mythos Leak, Brutally Honest CEOs & Data Center Moratorium

TL;DR

  • The OpenAI–Anthropic fight is personal, political, and now market-moving — Paul Roetzer says the Wall Street Journal’s deep dive shows this rivalry goes back to 2016 power struggles among Sam Altman, Greg Brockman, Dario and Daniela Amodei, and it now shapes everything from enterprise adoption to government contracts and IPO races.

  • A leaked Anthropic model, Claude Mythos, hints at another sharp capability jump — Fortune found roughly 3,000 exposed CMS assets describing Mythos as a tier above Opus with especially strong coding, reasoning, and cybersecurity abilities, while OpenAI’s next major model “Spud” is also reportedly nearing release.

  • CEOs are starting to say the quiet part out loud about jobs — Uber CEO Dara Khosrowshahi said AI could replace work that 70-80% of humans do within a decade, and PwC US CEO Paul Griggs warned employees who try to opt out of AI “are not going to be here that long.”

  • The hosts think entry-level knowledge work is the biggest unresolved labor problem — Roetzer argues AI-forward managers with domain expertise may become more valuable, but he keeps coming back to the same question: if AI can handle the tactical work juniors used to do, what exactly are companies hiring entry-level workers to learn on?

  • AI politics are hardening fast around data centers, defense, and deregulation — a federal judge blocked the Pentagon’s anti-Anthropic designation as likely unlawful, Bernie Sanders and Alexandria Ocasio-Cortez proposed a nationwide data center moratorium, and multiple pro-AI PACs are preparing to spend nearly $300 million pushing accelerationist candidates.

  • SmarterX’s own experiments show the real unlock is bespoke workflows, not generic AI magic — Roetzer used a 1,400-word prompt to generate an interactive AI transformation system prototype, while Mike Kaput got Claude Code to turn course scripts into branded slide decks, cutting a multi-hour task down to around 20 minutes.

The Breakdown

The feud underneath the AI industry

The episode opens on a big frame: five companies may end up deciding huge parts of the economy, geopolitics, and business. That leads into a Wall Street Journal investigation on the OpenAI–Anthropic rivalry, which Paul and Mike treat less like gossip and more like the operating system of today’s AI power structure.

Dario, Brockman, Altman, and the decade-old resentment

Mike walks through the Journal’s reporting: Dario Amodei saw Elon Musk’s layoffs as cruel, viewed Greg Brockman’s early AGI ideas as borderline treasonous, and later watched Sam Altman make overlapping promises about who would lead OpenAI. Paul adds texture from years of covering this beat, arguing the split wasn’t just philosophical — it was about credit, control, and who got a seat in the room when language models started to look world-changing.

Why this history matters right now

Paul zooms out and says this isn’t ancient drama; it explains today’s enterprise battles, government contract fights, and why Anthropic is winning over some business buyers. He lays out the frontier-lab map — Google DeepMind, OpenAI, and Anthropic as tier one; Meta and xAI behind them — and warns that these firms are racing across key dimensions like agents, computer use, memory, reasoning, and recursive self-improvement.

Claude Mythos leaks, and the labs are clearly farther ahead than users think

The next segment starts with Anthropic accidentally exposing 3,000 unpublished assets through its CMS, including material on an unreleased model called Claude Mythos and an invite-only CEO retreat in the UK. Paul’s main takeaway is blunt: there is always a more powerful model in training, and the public is always seeing reality 6 to 12 months late because labs are already in post-training and safety testing on the next thing.

Cyber panic and OpenAI’s “AGI deployment” language

Mythos is described as stronger than Opus, especially in coding, academic reasoning, and cybersecurity, and Wall Street immediately punished cybersecurity stocks like CrowdStrike, Palo Alto Networks, Okta, and Tenable. At the same time, OpenAI says it has finished pre-training its next major model, code-named Spud, and Paul connects that to OpenAI’s internal five-stage ladder from chatbots to reasoners, agents, innovators, and eventually “organizations” — AI that can do the work of an entire organization.

CEOs stop sugarcoating the labor impact

The most charged part of the episode is the discussion of Uber CEO Dara Khosrowshahi and PwC US CEO Paul Griggs. Paul says what he’s been hearing privately for more than a year is finally leaking into public comments: executives have been planning around labor displacement while telling employees and markets a softer story.

The part that keeps Paul up at night: entry-level work

Roetzer says AI-literate managers and directors who combine domain expertise with AI fluency should do well, while people who refuse to adapt may become unemployable. But the emotional center of the segment is his repeated concern about junior roles — if the administrative and tactical work becomes promptable, he genuinely doesn’t know what new workers are supposed to do to build careers.

Politics, data centers, security scares, and building with AI anyway

The back half moves fast: a judge blocks the Pentagon’s anti-Anthropic designation as “Orwellian,” Sanders and AOC propose a data-center moratorium, and new dark-money AI groups gear up to spend more than $100 million each on pro-deregulation politics. The hosts also hit a supply-chain attack on the widely used LiteLLM package, Apple opening Siri to rival assistants, and their own internal workflows — where the most practical lesson is that AI gets truly impressive when it’s shaped around your exact process, not used as a generic toy.