Top OpenAI Engineers Are Quitting (And You Won't Believe Where)
TL;DR
Top OpenAI talent is leaving for a stealthier bet: Periodic Labs — Dylan flags data from Crust showing former OpenAI employees landing at Periodic Labs, an under-the-radar company backed by NVentures, Accel, Andreessen Horowitz, Jeff Bezos, Eric Schmidt, and Jeff Dean, that is building an “AI scientist” to run hypotheses through autonomous real-world labs.
AI image restoration can quietly become AI image replacement — In Vlad Testov’s film-negative experiment, the model didn’t truly recover the original photo; it recognized objects like hands, trees, and clothing and regenerated a plausible version, even turning a child’s hand into an adult’s, which highlights the difference between restoration and probabilistic reconstruction.
Messy open-source repos may attract more AI coding agents than clean ones — Citing Andrew Nesbitt, Dylan says projects are getting flooded with AI pull requests when issues are vague, dependencies are stale, and the repository looks unfinished, because agents “thrive on ambiguity” and treat every loose end as something to fix.
AI can now reverse engineer old software like it’s reading a fossil record — Marco Costrotosis pointed AI at unlabeled 1986 arcade binaries and got a full reconstruction of the game in an afternoon: the system mapped the processor, decoded the copy protection, surfaced hidden messages, and recovered all 100 levels once it figured out the level data was split across two files.
Anthropic’s 81,000-person global survey shows people want AI for practical self-improvement, not just novelty — Across 159 countries and 70 languages, users mostly wanted help with professional excellence, life management, learning, health, and financial security; 81% said current tools already helped, while top concerns remained unreliability, jobs, misinformation, governance, privacy, and weaker human agency.
DeepMind’s AGI framework reframes the debate from a finish line to a ladder — Instead of asking whether AGI has “arrived,” Dylan highlights DeepMind’s grid of performance levels—competent, expert, exceptional, superhuman—crossed with generality, arguing current systems are still early because they’re broad but uneven and not yet reliably human-level across many tasks.
The Breakdown
A Matrix-style future where purpose gets industrialized
Dylan opens on a dystopian-but-funny video: by 2030, people lose jobs, then get sold back “purpose” by powering machines with their bodies. What sticks with him is the inversion — the less physical capability people need, the more they’ll want to project capability, like outsourcing fitness the same way we now outsource writing to AI.
The film negative experiment that didn’t restore a memory — it rewrote it
He gets pulled into Vlad Testov’s Medium post about using AI to convert photo negatives into finished images. The eerie part is that the model didn’t act like a darkroom; it broke scenes into semantic chunks and rebuilt them from priors, inventing believable but wrong details, including a child’s hand that came back looking adult. Dylan’s point is that the image may “look right,” but the original moment is gone.
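To pin down the difference, here’s a minimal sketch of the two approaches (not Testov’s actual pipeline; the segment/sample calls in the comments are hypothetical). Deterministic inversion computes every output pixel from the input, so it can be wrong about tone but never about content; a generative restorer samples content from learned priors, which is exactly where the adult hand comes from.

```python
# Deterministic "darkroom" inversion: per-pixel math, nothing invented.
# Requires Pillow (pip install pillow). Real color negatives also need
# orange-mask correction, omitted here for brevity.
from PIL import Image, ImageOps

def invert_negative(path: str) -> Image.Image:
    """Reversible restoration: out = 255 - in, per channel."""
    negative = Image.open(path).convert("RGB")
    return ImageOps.invert(negative)

# A generative "restorer" works nothing like this. In pseudocode
# (function names hypothetical, matching the behavior the post describes):
#
#   regions = segment(negative)                  # "this blob is a hand"
#   for region in regions:
#       out[region] = sample_from_prior(region)  # *a* plausible hand, not *the* hand
#
# The first path can only miss on color balance; the second can miss
# on what was actually in the photo.
```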
Robot stand-up and the weird future of anonymous trolling
A viral clip of an IRL comedy robot doesn’t really win him over. He suspects a human pilot, or at least an LLM set to something like “unhinged mode,” and says comedy feels like one of the last things he’d choose to watch a robot do. The sharper observation is about social risk: robots could deliver the kinds of brutal jokes people wouldn’t attach their own names to.
How to lure AI coding bots: make your repo worse on purpose
From Andrew Nesbitt’s post, Dylan shares one of the strangest dev hacks in the video: messy projects attract AI contributors. Clean issues with obvious fixes don’t leave enough room for agents, but a vague line like “something feels off in this repository” invites them to invent both problem and solution. His summary is memorable: human contributors like clarity; AI bots like ambiguity.
AI pointed at 1986 binaries and reconstructed a lost game world
He’s genuinely dazzled by Marco Costrotosis’s experiment with raw arcade files from 1986. The AI didn’t just document them — it mapped processors, cracked old copy protection, extracted sprites and sounds, uncovered hidden conditions, and eventually rebuilt all 100 levels after discovering the design data was split across two files. Dylan treats it as proof that software is just patterned data, and AI is getting shockingly good at reading it directly.
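The video doesn’t say how the two-file split worked, but one common mechanism on 16-bit arcade boards is byte interleaving: ROM contents alternate even and odd bytes across two chips, so neither dump reads sensibly alone. Here’s a hedged sketch of how you might test that hypothesis (filenames are hypothetical, not from the experiment):

```python
# Merge two byte-split ROM dumps and check whether the combined image
# suddenly looks meaningful. On many 16-bit boards, data is interleaved
# byte-by-byte across two chips (even bytes in one file, odd in the other).

def interleave(even_path: str, odd_path: str) -> bytes:
    """Re-zip two byte-split dumps back into one linear ROM image."""
    even = open(even_path, "rb").read()
    odd = open(odd_path, "rb").read()
    merged = bytearray()
    for e, o in zip(even, odd):
        merged += bytes([e, o])
    return bytes(merged)

def printable_ratio(data: bytes) -> float:
    """Crude signal: merged data that contains readable ASCII runs
    (level names, hidden messages) suggests the interleave is right."""
    printable = sum(32 <= b < 127 for b in data)
    return printable / max(len(data), 1)

rom = interleave("level_a.bin", "level_b.bin")  # hypothetical dumps
print(f"printable bytes after merge: {printable_ratio(rom):.1%}")
```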
What 81,000 people actually want from AI, and what scares them
Anthropic’s global survey lands because it’s broad: 81,000 users, 159 countries, 70 languages. Dylan says the results feel grounded — people mostly want AI to reduce busywork, help with decisions, free time for real expertise, and make them healthier, more capable, and more secure. But the fear stack is just as real: hallucinations, job loss, misinformation, governance, privacy, and the possibility that people themselves get mentally weaker.
DeepMind’s AGI ladder gives Dylan a new mental model
He compares the paper to “Allen’s conservative countdown to AGI,” but says DeepMind’s framework has more weight because it comes from a world-class research group. The key move is to stop treating AGI like a single finish line and instead rate systems on performance and generality, with labels like competent, expert, exceptional, and superhuman. That lands for him because a chess engine can be superhuman yet narrow, while chatbots are broad but still patchy.
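As a toy rendering of why the grid reframes things, imagine scoring a system per capability and summarizing both its peak and its floor. Using the labels Dylan cites (the paper’s own wording differs slightly), a chess engine and a chatbot land in very different cells; the scores below are illustrative, not from the paper.

```python
# Performance ladder per Dylan's summary; a system gets a level *per
# capability*, and generality is how broad the capability set is.
PERFORMANCE = ["competent", "expert", "exceptional", "superhuman"]

def profile_summary(scores: dict[str, str]) -> str:
    """scores maps capability -> performance label for one system."""
    best = max(scores.values(), key=PERFORMANCE.index)
    floor = min(scores.values(), key=PERFORMANCE.index)
    return f"{len(scores)} capabilities, peak={best}, floor={floor}"

# Narrow-but-superhuman vs broad-but-patchy (illustrative, not data):
chess_engine = {"chess": "superhuman"}
chatbot = {"writing": "expert", "coding": "competent",
           "math": "competent", "chess": "competent"}

print(profile_summary(chess_engine))  # 1 capabilities, peak=superhuman, floor=superhuman
print(profile_summary(chatbot))      # 4 capabilities, peak=expert, floor=competent
```

The point of the two summary lines is the ladder itself: neither system has “arrived” anywhere; they just occupy different cells on the performance-by-generality grid.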
Periodic Labs, value-training AI, stress-aging, jellyfish clocks, and freezing brains
The back half turns into a fast-moving set of future-shaping ideas: ex-OpenAI talent heading to Periodic Labs to build an AI scientist tied to autonomous physical experimentation; Bay Area animal-welfare researchers trying to instill moral concern for animals into future AI systems before those systems hold real power; and a study showing one “hassler” relationship can add about 9 months of biological aging. He then riffs on jellyfish that keep time without standard clock genes as a possible analogy for more coherent AI timing systems, and closes on vitrified mouse brain tissue that regained neural activity after a week frozen — not revival, but enough to make cryostasis feel a little less like pure sci-fi.