Alex Kantrowitz · 1h 1m

Why OpenAI Killed Sora, Did Apple Just Save Siri?, Meta’s Big Loss

TL;DR

  • OpenAI didn’t kill Sora because video AI flopped — it killed it because Sora sits on a different “branch of the tech tree” than GPT reasoning models. Alex says Greg Brockman told him Sora’s world-model approach competes for scarce compute and focus with the GPT-style systems OpenAI now sees as the clearest path to stronger agents, coding, and an IPO as soon as Q4.

  • The AI race is consolidating around one prize: an always-on agent that can use your desktop, phone, apps, and memory to do real work. Alex and Ranjan argue OpenAI and Anthropic are now converging on the same “open-claw” vision, while companies like Sierra, Notion, Intercom, Cursor, and Writer are all chasing the same autonomous-knowledge-work opportunity.

  • Apple’s Siri update sounds bigger than it is — adding rival assistants inside Siri may just recreate today’s clunky ChatGPT handoff. Bloomberg reports Apple plans to let App Store AI chatbots integrate with Siri in iOS 27, but both hosts think this looks less like “Siri is fixed” and more like Apple taking a cut of third-party subscriptions without solving Siri’s core weakness.

  • Anthropic and OpenAI both appear close to stronger models, with Anthropic’s leaked “Claude Mythos/Capiara” described as a “step change” beyond Opus. The draft post claims dramatically higher scores in coding, academic reasoning, and cybersecurity, while Sam Altman told staff OpenAI’s next major model, code-named “Spud,” could arrive in weeks and “accelerate the economy.”

  • Meta’s court loss matters less for the $6 million payout than for the precedent that social platforms can be liable for harms caused by product design. Alex frames the California verdict — plus Meta’s separate $375 million loss in New Mexico — as a potential Section 230 boundary test that could trigger thousands more cases and eventually force Supreme Court review.

  • OpenAI shelving its erotic chatbot plans is another sign the “side quest era” is over. The hosts joke about the death of “adult mode,” but the underlying point is serious: OpenAI is cutting products that create legal, reputational, and strategic drag as it refocuses on core enterprise, coding, and agentic systems.

The Breakdown

Sora’s real problem wasn’t demand — it was strategy

Alex opens with the headline that OpenAI is winding down Sora’s consumer app, API access, and video features inside ChatGPT, despite Sora recently hitting No. 1 on the App Store. He and Ranjan joke about their best prompts — a chicken and horse circling a toilet, Jake Paul helping an old lady cross the street, a cat with a shotgun shooting a Ring doorbell — but the joke lands because that was the problem: fun, viral, expensive, and not obviously a durable business.

Greg Brockman’s explanation: Sora lives on the wrong tech tree

Alex says after meeting Greg Brockman in San Francisco, he learned this wasn’t mainly a consumer-vs-enterprise decision. Brockman’s key line: Sora’s world-model/video systems are “a different branch of the tech tree” from the GPT reasoning line, and OpenAI can’t aggressively pursue both without slowing the branch it now believes matters most. The hosts treat that as a pretty stunning admission: world models may be trendy, but OpenAI is choosing focus over hype.

The race is narrowing into one big agent battle

From there, the conversation zooms out: OpenAI and Anthropic are no longer playing different games. Alex says both now seem to want the same thing — an AI with access to your desktop, phone, tools, and persistent memory that can actually act on your behalf — while Ranjan notes Sierra, Notion, Intercom, Cursor, and Writer are all circling the same prize. The old split of OpenAI = consumer flair and Anthropic = enterprise coding is giving way to a direct collision.

Why trust is still the giant blocker

They get concrete about where this could go: Alex imagines an AI that negotiates with insurers or monitors your health data, and Ranjan boils the category down to three ingredients — always on, connected to your data, and able to take action. But both acknowledge the biggest obstacle is trust; Ranjan admits he’ll accept AI-generated email drafts based on his Gmail history, yet still won’t let it hit send automatically. That hesitation, they suggest, is exactly where adoption will be won or lost.

New model buzz: Anthropic’s Mythos and OpenAI’s hilariously named Spud

A leaked Anthropic document introduces “Claude Mythos” or “Capiara,” a new tier above Opus that the company says is its most capable model yet and a “step change” in performance. Alex’s bigger point is that progress may feel incremental in each release note but still compound into something dramatic over time, while Ranjan says that’s probably the right lens — the tech is accruing value faster than the marketing language can honestly describe. Then they pivot to OpenAI’s next model, code-named “Spud,” and spend a delightful amount of time roasting the least intimidating codename in AI history.

Apple may be opening Siri without actually fixing Siri

On Apple, Bloomberg reports Siri will let rival App Store AI assistants plug into the system in iOS 27. Both hosts’ reaction is basically: this is not the Siri comeback story people want. Instead of Siri becoming genuinely smart, this sounds like a slightly more integrated version of today’s awkward ChatGPT handoff — plus a possible new way for Apple to take a cut of AI subscriptions.

Meta’s loss could matter because it cracks the Section 230 shield

The show then turns to Meta and YouTube being found negligent in a California case brought by a now-20-year-old woman who said their products were as addictive as cigarettes or digital casinos. Alex says the damages — $4.2 million for Meta and $1.8 million for YouTube — are less important than the precedent: courts are starting to say the harm may come from platform design itself, not just user content. Ranjan leans into the algorithm point hard, arguing reverse-chronological feeds would have changed everything, and even tosses out the scorching theory that Twitter’s 2016 shift to algorithmic ranking helped shape today’s political climate.

Side quests, erotic chatbots, and a very unserious funeral

They close with a mock eulogy for OpenAI’s shelved erotic chatbot plans, citing the Financial Times report that internal concerns about unhealthy attachment and minors helped kill the feature. Beneath the jokes, the message is consistent with the whole episode: OpenAI is stripping away weird, risky experiments as it heads toward a tighter product strategy. Naturally, they still manage to end on a deeply cursed startup-name riff, because this is Friday and they can’t help themselves.