The Future Live | 03.27.26
TL;DR
OpenAI killed Sora because the math stopped working — Matt Berman and Hiten Shah frame the shutdown as a mix of weak consumer growth, massive GPU cost, and an “untenable” copyright/IP problem, especially after Disney’s planned $1 billion investment reportedly disappeared with the product.
Anthropic’s coding-first strategy is becoming the template — The show argues Anthropic found a powerful flywheel: sell coding models to enterprise, reinvest revenue into better models and tools, then use those tools to ship even faster, which is why OpenAI is now refocusing Fidji Simo’s org from “applications” toward “AGI deployment.”
Coding matters beyond coding because it teaches models how to think and build tools — Turing CEO Jonathan Siddharth says coding is foundational because it accelerates AI research itself, improves out-of-domain performance in areas like physics and chemistry, and lets models create their own tools via code execution.
The real bottleneck is no longer raw internet data, but expert human judgment and deployment data — Siddharth says the internet has largely been consumed for pretraining, but frontier labs still need expert knowledge locked in human heads plus real enterprise failure cases, which is why Turing works with labs and with companies like Goldman Sachs, BlackRock, Pfizer, and Apollo.
Enterprise AI adoption is still near zero compared to what the models can already do — Despite all the hype, the hosts and Siddharth agree that most enterprises have barely moved beyond basic ChatGPT rollout; the biggest opportunity now is building the scaffolding, workflows, evals, and change management around already-capable models.
GPU scarcity is shaping product strategy in public — Anthropic tightened Claude session limits for heavy users just as OpenAI’s Codex team bragged about resetting limits to “build unlimited things,” a live example of how compute constraints, business model choices, and company culture are directly affecting user experience.
The Breakdown
Sora’s Shutdown and the “AI Slop” Hangover
Matt opens with the big news: OpenAI is shutting down Sora, stepping back not just from the social layer but from offering the video model at all. Hiten’s blunt take is that consumer apps that don’t grow get cut, and that Sora may also have been crushed by GPU cost and copyright risk. Their riff on AI video is memorable: maybe Sora was like eating McDonald’s for a month — fun at first, until you realize you don’t actually want more junk food.
Why Disney’s Deal Died With Sora
The conversation gets more interesting when Matt brings up Disney’s earlier Sora partnership and a reported $1 billion OpenAI investment tied to licensing. Both hosts think the core issue was IP control: if users can prompt-hack a model, there’s no reliable way to stop unauthorized Disney characters from showing up. Hiten’s read is simple — if the specific product a partnership depends on disappears, the deal disappears too.
From Consumer Chaos to Enterprise Focus
Matt and Hiten connect the Sora shutdown to a broader OpenAI reset under Fidji Simo. The key shift is symbolic and strategic: moving from “applications” to “AGI deployment,” which they interpret as a pivot away from random consumer bets and toward enterprise. Their bigger thesis is that Anthropic forced this change by showing that focus — especially around coding and enterprise revenue — beats shipping a thousand disconnected features.
The Anthropic Flywheel Everyone Is Chasing
This is the show’s clearest strategic argument: Anthropic is winning because coding models create a self-reinforcing loop. Better coding leads to better internal tools, those tools help train and ship better models, and enterprise buyers happily pay for it. Matt sounds almost awed by Anthropic’s pace, saying he’s never seen a company ship so many major features so quickly.
Jonathan Siddharth Explains Turing’s Bet
Turing founder and CEO Jonathan Siddharth joins and describes Turing as an AI infrastructure company “accelerating superintelligence advancement and deployment.” His business sits between frontier labs and the enterprise, generating high-quality data, RL environments, and evals for labs while also building end-to-end enterprise AI systems for firms like Goldman Sachs, BlackRock, Apollo, and Pfizer. His core idea is a loop: deployment reveals where models break, and those failures become training signals.
Why Coding Is the Key to AGI
Siddharth gives the strongest technical case of the episode for why coding matters so much. It helps automate AI research itself, appears to improve reasoning in other domains like biology and chemistry, and quietly powers many “non-coding” workflows because models use code behind the scenes to analyze, search, and act. His sharpest line: once a model masters coding, it can create its own tools — “the ask-a-genie-for-infinite-wishes” move.
The Next Data Gold Rush: Experts, Enterprises, and Dead Startups
Asked whether we’ve run out of human data, Siddharth says no — but the valuable data now isn’t generic web text. It’s expert knowledge still trapped in people’s heads, plus deployment data from real enterprise workflows where models fail on hidden business rules and weird file formats. He also drops the wildest idea of the show: Turing’s “Project Lazarus,” where they buy dead companies and their code, docs, and internal assets so those “spirits” can live on in future models.
Layoffs, GPU Limits, and the Real Shape of AI Adoption
In the final stretch, Matt and Hiten compare layoffs at Meta, Oracle, and Block with OpenAI’s plan to nearly double headcount. They agree Jensen Huang is basically saying AI isn’t the main reason for cuts — bad org design and bloated coordination are. Then they swing back to product reality: Anthropic just tightened Claude limits for the heaviest users, while OpenAI’s Codex team instantly mocked them by resetting usage caps, a perfect little snapshot of how demand, compute scarcity, and rivalry are playing out in real time.