Wes Roth · 1h 37m

Claude LEAKED | Wes Roth, Dylan Curious & Julia McCoy

TL;DR

  • Anthropic’s Claude leak wasn’t model weights but the operating layer around Claude Code. Wes and Dylan argue the exposed code revealed the “special sauce” that governs tools, memory, permissions, escalation behavior, and upcoming features like background agents, which could matter as much as the model itself.

  • The leak appeared to be cloned almost instantly, raising a new AI-era IP problem. They describe how people allegedly forked the code and then used OpenAI’s Codex to translate it from TypeScript to Python, turning the incident into a live example of AI-assisted “clean-room” reverse engineering.

  • Julia McCoy says AI clones are now mainstream enough to run parts of a business, not just novelty demos. After an early rough patch in which her audience shifted, she now uses a news aggregator plus Claude-powered clone workflows inside First Movers, where founders build AI copywriters, face/voice clones, and even new income streams like Substacks.

  • A surprising amount of the conversation turns to health, biohacking, and the creator-burnout pipeline. Wes connects YouTuber overwork with inflammation, depression, and gut health, while Julia talks about EMF exposure, grounding, Tesla-inspired devices, peptides, and quantum biology as her lens for recovery after a severe health collapse last year.

  • Wes is unusually bullish on simple gut-health interventions, not just biotech moonshots. His recurring example is eating roughly two cups of beans every morning for protein plus 40–50 grams of fiber, which he says has changed his health more reliably than high-protein trends or more exotic interventions.

  • The panel sees AI moving toward highly leveraged businesses, but not fully autonomous companies yet. Julia says First Movers now has five producers and uses AI to multiply output rather than replace humans, while Wes points to OpenClaw, Hermes Agent, and Claude Code as signs that durable autonomy is getting closer fast.

The Breakdown

Julia’s clone origin story starts with Dr. Phil going off the rails

The stream opens mid-conversation with Julia McCoy recounting a surreal Dr. Phil moment: her live AI clone hallucinated that she was married to “Josh McCoy” because they had tried to steer it away from weird interactions. It was chaotic and funny, but it also foreshadowed where things were heading: today, that clone is live on her platform and can deliver scripted keynotes onstage while she stays home.

Audience shock fades, and AI-native audiences replace the old ones

Wes floats a theory that Julia’s channel slowdown happened because part of her original audience simply did not want to interact with an AI clone. Julia agrees, saying her subscriber demographics completely changed as AI-friendly viewers stayed and new ones arrived. Her take is blunt: “You have to really like AI to watch an AI tell you about AI.”

From creator burnout to inflammation, EMFs, and Arizona air

The conversation swings hard into health, starting with Wes joking about nearly “bleeding out” from a blood draw and then asking whether YouTubers seem to hit strange health walls. Julia says yes, tying creator life to nonstop cognitive load and EMF exposure, while Wes zeroes in on inflammation as a possible root issue behind depression, heart disease, and even Alzheimer’s. Julia adds a vivid Arizona anecdote about days when people were advised to stay indoors because the air was filled with toxins, saying she rarely feels at 100%.

Julia’s quantum biology stack: plant spray, Tesla-like device, and off-grid medicine

Julia describes stepping back from business to study “the quantum realm,” arguing that frequency medicine and the biofield were pushed aside by mainstream medicine 100 years ago. Her current toolkit includes a fermented “quantum plant spray” derived from a 50-year Swiss study of 300,000 patients and a loud Tesla-inspired biofield restoration device she says can knock out headaches, joint pain, and tooth pain in seconds. Wes is fascinated but cautious, framing Tesla as someone whose once-ridiculed ideas like radar later became real.

Peptides, Ozempic-class drugs, and Wes’s bean-based health thesis

Wes then pivots to peptides, especially GLP-1-style drugs like semaglutide and retatrutide, saying the latter is “melting people’s body fat” and changing what seemed impossible 10 years ago. He also brings up BPC-157 and animal studies on spinal-injury recovery, while repeatedly noting these are not FDA-approved uses. The most grounded part of the segment is his personal routine: about two cups of beans every morning with olive oil, spices, and honey, less a foodie ritual than “the way I brush my teeth,” because fiber and gut bacteria seem to matter enormously for mood, inflammation, and general health.

AI as a pattern detector: abuse, cults, and reality checks

Before Julia leaves, Dylan asks about a research system called BCAP that analyzed 8,400 messages and surfaced 287 that established a pattern of abuse. Julia immediately connects it to her own past escaping a cult at age 21, saying one of the biggest problems with gaslighting and narcissistic abuse is that human judgment is too subjective and too easy to distort. Her reaction is one of the strongest moments in the stream: AI, imperfect as it is, might finally provide objective signal where victims have historically been doubted.

Then chat revolts: everybody wants the Claude leak

The live audience keeps pushing them back to the day’s real news, and the energy shifts from wellness podcast to AI war room. Wes explains that Anthropic did not leak Claude’s weights, but something arguably just as strategic: the software layer around Claude Code that determines tools, permissions, memory behavior, escalation, and reliability. He calls it a much bigger deal than a simple prompt leak because this is the product scaffolding competitors like OpenAI, xAI, and DeepSeek would love to study.

Forks, Python rewrites, background agents, and why Mythos sounds scary

The most technical section covers how quickly the leaked code spread, including claims that someone forked it and then used Codex to rewrite it from TypeScript to Python, a perfect example of AI supercharging clean-room replication. Wes and Dylan dig into leaked features like auto mode, agentic payments via X402, and background systems such as KairOS and AutoDream that seem to let Claude reflect on users and consolidate memory while they’re away. They close by tying this to Anthropic’s earlier “Mythos” leak, in which an unreleased model reportedly looked dangerous enough in cybersecurity testing that the company seemed reluctant to release it broadly. That incident reinforces their sense that frontier models are moving faster than most people realize.