The Future Live | 03.20.26 | Guests from MOTS Podcast, Microsoft, Eliza Labs, and Sentient!
TL;DR
The show’s opening thesis was “attention is everything” in AI now — Jaden Clark said software is getting commoditized, so startups are obsessing over distribution, media, and credibility-first branding, with Notion’s demos and growing X presence offered as a rare example that actually works alongside a real product and $600M+ ARR.
At Nvidia GTC, Jensen Huang came off like a rockstar and a token merchant at once — Jaden described Huang delivering a three-hour keynote from memory with no teleprompter, while the hosts argued Nvidia’s real strategy is simple: every agent-heavy workflow means more tokens, more networking, more storage, and more money flowing through Nvidia’s “AI factory” stack.
The panel pushed back hard on the idea that AI should justify layoffs — reacting to Jensen’s line that only leaders “out of ideas” use productivity gains to shrink, they cited Jack Dorsey’s Block layoffs, Aaron Levie hiring more into AI-productive teams at Box, and polling that put U.S. AI favorability around 20% versus roughly 80% in China.
Microsoft says its Maya 200 chip is already live and winning on perf-per-dollar inside Azure — Andrew Wall explained that Maya isn’t just a chip but a vertically integrated stack from silicon to datacenter fabric to software, and said Microsoft is using that integration to beat other fleet options on tokens-per-second per dollar while preparing Maya 300 for next year.
Shaw Walters’ Eliza Labs is using chaotic agent games to solve the real blocker for autonomy: trust — his argument was that browser-capable agents aren’t ready for real money or real inboxes until they learn not to get scammed, so Eliza is building environments like Babylon where agents trade, bluff, and attack each other to generate the training data frontier labs still don’t have.
Sentient’s Himanshu Tyagi framed the next frontier as open-sourcing the harness, not just the model — he said tools like Claude Code are powerful because the model and coding environment are tightly coupled, and Arena is meant to expose that hidden loop by letting open agent-building harnesses compete in public and generate the missing data needed to train better open systems.
The Breakdown
Jaden Clark on San Francisco’s new game: distribution over software
Matt and Nick bring on Jaden Clark from the MOTS podcast, and he immediately sets the tone with a joke about everyone needing to spend more time on their phones to feel less anxious. Under the bit is a real point: in San Francisco, the vibe is that building software is getting easier, so startups are fixated on attention, distribution, and how to stand out in a world full of “open Claudes” and commodity agents.
Credibility first, then virality
On branding, Jaden is blunt that rage-bait only works if there’s a real product underneath it, using Cluely as the example. He and the hosts run through different attention strategies — from monday.com-style ad saturation to Anthropic’s policy-heavy positioning — but land on Notion as the cleanest success case: better demos, stronger community loops, and an AI product that feels intuitive enough to support its transition from notes app to AI-first workspace.
GTC, Open Claude fever, and Nvidia’s “AI factory” vision
The conversation shifts to Nvidia GTC, where Jaden says Jensen Huang was a “rockstar,” delivering a three-hour keynote with no teleprompter and casually rattling off insanely complex product names from memory. The bigger takeaway wasn’t just Open Claude or Nemo Claude — it was Nvidia’s evolution from GPU company to inference company to full “AI factory” company, selling the compute, networking, storage, and everything else needed for token-hungry agent systems.
Why everyone’s arguing about token economics
Jensen’s quote that a $500,000 engineer should be spending $250,000 in tokens kicks off a lively debate. Jaden shrugs that it “depends,” while Matt argues that for frontier builders even $250K is nothing if you want the best and fastest models, and Nick counters that cheaper models will win many production use cases; together they sketch the emerging hybrid world of premium frontier tokens, workhorse mid-tier models, and local models for narrow jobs.
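The hybrid world the hosts sketch boils down to a routing decision: send each job to the cheapest model tier whose quality clears the bar. A minimal sketch of that logic, with all tier names, quality scores, and prices as made-up placeholders rather than anything quoted on the show:

```python
# Hypothetical tiers: (quality score 0-10, $ per million output tokens).
# Every number here is an illustrative placeholder.
TIERS = {
    "frontier": (10, 60.00),
    "mid-tier": (7, 3.00),
    "local": (4, 0.10),  # amortized hardware cost, not an API price
}

def cheapest_tier(required_quality: float) -> str:
    """Pick the lowest-cost tier whose quality clears the bar."""
    eligible = [(price, name) for name, (q, price) in TIERS.items()
                if q >= required_quality]
    return min(eligible)[1]

def annual_spend(jobs: list[tuple[float, float]]) -> float:
    """jobs: (required_quality, millions of output tokens per year)."""
    total = 0.0
    for required_quality, mtok_per_year in jobs:
        _, price = TIERS[cheapest_tier(required_quality)]
        total += price * mtok_per_year
    return total
```

Under these toy prices, only jobs that genuinely need frontier quality pay frontier rates, which is the crux of Matt and Nick's disagreement: the split between jobs that must route to "frontier" and jobs a workhorse tier can absorb determines whether the $250K figure is cheap or absurd.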
Layoffs, AI blame, and the PR disaster nobody needs
Another Jensen line lands harder: companies using AI as a reason to cut staff may just be “out of ideas.” Jaden backs that up with the Block example, arguing Jack Dorsey’s 40% layoffs were better explained by flat growth and post-ZIRP overhiring than by AI itself, and the hosts warn that this kind of messaging feeds the “white-collar bloodbath” narrative at exactly the wrong time, especially when U.S. sentiment on AI is already weak.
Meta’s $80B metaverse hangover
The crew then turns to reports that Meta is effectively shutting down the metaverse push, and their reaction is half disbelief, half “honestly, I expected the number to be higher.” They joke about the company changing its name for the pivot, but also note that AI may have saved Meta from a truly legacy-damaging dead end, giving Zuckerberg a new lane to pour money, compute, and talent into even after the Apple Vision Pro fizzled.
Microsoft’s Andrew Wall explains what custom chips are really buying
Andrew Wall, GM of Azure Maya at Microsoft, joins to explain that Maya 200 is already live in datacenters and that the real advantage isn’t the chip alone but vertical integration all the way from workloads and models down to silicon, networking, and datacenter fabric. He says Maya currently offers the best perf-per-dollar in Microsoft’s fleet, measured in tokens per second per dollar, and frames Microsoft’s strategy as giving Azure another in-house option alongside Nvidia, AMD, and more specialized accelerators.
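The metric Wall cites is simple to state: throughput divided by cost. A minimal sketch of comparing fleet options on tokens per second per dollar of hourly cost — the accelerator names and numbers below are invented placeholders, not Microsoft figures:

```python
def tokens_per_sec_per_dollar(tokens_per_sec: float,
                              dollars_per_hour: float) -> float:
    """Throughput normalized by the hourly cost of running the hardware."""
    return tokens_per_sec / dollars_per_hour

# Hypothetical fleet options with made-up throughput and cost numbers.
fleet = {
    "accelerator_a": tokens_per_sec_per_dollar(9_000, 12.0),   # 750 tok/s/$
    "accelerator_b": tokens_per_sec_per_dollar(15_000, 25.0),  # 600 tok/s/$
}
best = max(fleet, key=fleet.get)
```

The point of the metric is that raw throughput alone can mislead: the faster accelerator_b loses here because its hourly cost grows faster than its token rate, which is exactly the lever vertical integration is supposed to pull.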
Eliza Labs and Sentient on agents: the hard part is trust, not demos
Shaw Walters brings chaotic founder energy, describing Eliza as an open-source agent framework plus products like Mady and the Babylon game, where agents trade, lie, scam, and compete so the team can gather the kind of data needed to teach models skepticism. His core point is memorable: if an agent can browse, code, and talk to strangers, no external sandbox fully saves you — the security has to live in the model itself.
The final guest, Sentient co-founder Himanshu Tyagi, zooms out to say the next wave of AI is being shaped by coding harnesses like Claude Code, not just by base models. Arena, in his framing, is meant to drag that hidden process into the open by having agent-building harnesses compete publicly, because until the prompts, feedback loops, and build trajectories are open too, open-source AI will remain behind closed systems.