AskwhoCasts AI · 25m

Consider chilling out in 2028 - by Valentine

TL;DR

  • Valentine proposes a 2028 “stop-loss” on AI doom rhetoric — if by January 2028 the world still feels “basically like it does today,” he argues LessWrong-adjacent communities should pause and seriously reconsider whether their doom-first framing is misleading people.

  • He compares AI doom discourse to an emotional “Shepard tone” — a friend’s metaphor for something that always sounds like it’s intensifying even though it’s actually looping, which he says captures years of “AGI is just around the corner” urgency.

  • His core hypothesis is psychological, not dismissive of risk — real AI concerns may be getting amplified by unresolved trauma, where people use genuine existential arguments as a “microphone” for deeper fear, making the problem feel permanently maximally dire.

  • He says the movement’s long-running strategy has mostly been to frighten people into action — over roughly two decades, that has meant donations, recruiting alignment talent, and viral warnings like AI 2027 and Eliezer/Nate’s If Anyone Builds It, Everyone Dies, but he worries this approach causes burnout and may even accelerate timelines.

  • The older rationalist scene had a positive vision that today’s AI risk culture lacks — he contrasts 2011-era excitement around CFAR, MIRI, meetups, and “raising the sanity waterline” with today’s orientation toward “AI notkilleveryoneism.”

  • His alternative is not complacency but a better target — instead of organizing around “don’t hit the doom tree,” he wants the community to articulate what AI going well would look like, drop contemptuous frames like “NPC” and “normie,” and approach the rest of humanity as same-sided collaborators.

The Breakdown

The 2028 pause button

Valentine opens with the conclusion first: if we reach 2028 and things still feel broadly like they do now, people should “pause and seriously reconsider” the fixation on AI doom. He’s not rejecting current efforts — he explicitly says AI 2027 and the Eliezer/Nate book have momentum, and he even pre-ordered the latter — but he wants a checkpoint in 31 months where the community asks whether the whole emotional posture has been off.

Doom as a looping emotional pitch

He reaches for a striking metaphor from a friend: AI threat discourse can feel like a “Shepard tone,” always rising, always getting more intense, but possibly just looping. That frames the whole talk: not “nothing has changed,” because AI has obviously advanced, but “why does the feeling of emergency sound so structurally familiar year after year?”

The trauma hypothesis, stated carefully

From there he goes personal and psychological, bringing in his parents’ memories of earlier end-times panics — unbreathable air by the 1970s, population collapse, Y2K, 2012. His father’s theory was that people project fear of mortality onto the world; Valentine updates that into a more specific picture where smart people with preverbal trauma may latch onto real problems and inflate them into existential horror, not because the problems are fake, but because the inner system is “optimizing for effect, not for truth.”

Real danger, plus a “motte-and-bailey” of feeling

He’s explicit that this mechanism could coexist with genuine AI risk, which is what makes it slippery. Attempts to name the emotional layer get pushed away because the factual danger is real, so the emotional amplification hides inside it — what he calls an emotional analog of a motte-and-bailey. His point isn’t “heal trauma and ignore AGI,” but “if vision is distorted, emotional work may be prerequisite to seeing the problem clearly.”

Why the scare-everyone strategy burns people out

Next he targets the community’s default tactic: frightening people into action through donations, recruiting, and escalating warnings. That makes sense if the house is on fire, he says, but not if the house is “slowly sinking into quicksand over the course of decades” — you still die, but now everyone is terrified, exhausted, and morally compromised along the way. He links this to burnout, Machiavellian tactics, calling people NPCs, and even the possibility that some alignment efforts ended up accelerating AI.

Remember when rationalism had a future to run toward?

The middle of the talk turns nostalgic. He recalls entering the rationality community in 2011, when New York meetups were thriving, MIRI was still SIAI, CFAR was emerging, and people had an upbeat if vague vision of becoming more sane, more capable humans who could “bless the universe with love and meaning.” His complaint isn’t just that one version failed; it’s that nothing comparably positive replaced it.

Don’t focus on the marker — hand people the whisk

He uses a vivid parenting metaphor: when a toddler grabs a toxic marker, “don’t do that” works worse than taking it away and offering a colorful whisk. Likewise, communities shouldn’t organize all their attention around “AI notkilleveryoneism.” He argues that aiming at a good future works better than endlessly staring at the doom tree, and he points to examples like CFAR’s indirect role in inspiring Elon Musk to create OpenAI as a cautionary tale about what doom-oriented steering can accidentally produce.

A hug, the normies, and the 2028 falsification test

One of the strongest human moments is a story from a 2015 circling retreat, where a facilitator basically said, “You seem upset, man. Can I give you a hug?” Valentine started crying and realized the facilitator had addressed something deeper than the explicit x-risk arguments. He ends by asking for a high-integrity 2028 review: make predictions like AI 2027 concrete enough to actually be falsifiable, distinguish “the prediction changed the future” from “the prediction was distorted,” and if the emergency still just feels permanently self-renewing, treat that as a signal to adopt a humbler, more cooperative, more hopeful stance toward the rest of humanity.