SUNO 5.5 INSANITY and other AI news...
TL;DR
Suno 5.5’s big unlock is personalization, not just prettier songs — Wes Roth expected a gimmick when Suno teased “we’re going to get personal,” but the real story was voice cloning, custom models, stems, mashups, and a more studio-like workflow that feels aimed at serious creators.
The livestream’s funniest reveal was also the clearest product truth: voice cloning reproduces you, it doesn’t upgrade you — Wes assumed Suno would make him sound better, then heard painfully accurate clones of his rushed Bohemian Rhapsody/National Anthem-style samples and realized the model is built for fidelity, not auto-Ed-Sheeran mode.
Wes and Dylan think AI music has crossed the line where most listeners can’t reliably tell — they cite a study from around the Suno v4 release suggesting average listeners struggle to distinguish AI songs from human-made ones, while also noting estimates of roughly 420,000 AI songs per week hitting streaming platforms by 2026.
Suno now feels less like a slot machine and more like an instrument — Wes says older versions rewarded brute-force generation until one miracle hit appeared, while 5.5 seems intentionally more steerable, with production and vocal improvements that help artists shape songs rather than replace the artist outright.
The stream accidentally became a live demo of how fast AI can discover commercially viable niches — after joking through styles like “Mongolian throat singing,” “Warcraft orchestral,” and dubstep, they generated “Fast Cars Go Vroom,” a track both hosts immediately treated like a real, playable gym/car song.
The broader AI-news subtext: copyright, creators, and culture are still unresolved, but users aren’t waiting — chat overwhelmingly said they don’t have strong feelings either way about AI music, while Wes argued existing copyright frameworks target reproduction and distribution more than training, making lawsuits against companies like Suno legally messy.
The Breakdown
Schrödinger’s Livestream and Suno’s “Get Personal” Drop
The stream opens in full Wes Roth chaos: they’re not sure if they’re live, joke about “two quantum possibilities,” and then dive straight into Suno’s teaser, “tomorrow we’re going to get personal.” Wes guessed it meant either voice uploads or diss tracks; it turned out to be voice cloning, which immediately set the tone for a very unpolished, very human test session.
A Quick Tour From Suno v3 to v5.5
Wes and Dylan play older Suno generations to show the jump from echoey, “tingy” vocals in v3 to much more polished production in v4.5 and v5. Dylan says bass, guitars, strings, and club-like effects already sounded shockingly good before 5.5, while human vocals remained the main tell. Wes adds that some commenters from the music industry were saying prior versions already sounded close to what you’d get in a studio.
The Voice Clone Test Goes Sideways — Because It Sounds Too Much Like Wes
The big live moment is Wes trying his own cloned voice in Suno, only to discover it doesn’t beautify him — it mirrors him. Dylan twists the knife, saying the model did exactly what it promised and that Wes really wanted “your vocal tones, but to sound like Ed Sheeran.” Wes explains the clone was trained on a rushed, low-quality sample after 20–30 failed attempts, so the result was more “painfully accurate” than impressive.
Why They Think Suno Is a Real Threat to the Music Stack
The conversation zooms out to whether Suno could become a Spotify competitor or even “eat up” most music creation. Wes mentions a study suggesting average listeners already struggle to distinguish AI music from human music, and Dylan cites a rough estimate of 420,000 AI songs per week being submitted to streaming services by 2026. Their point isn’t just quality — it’s scale, plus the weird reality that AI is now helping both generate songs and rank which songs are worth hearing.
Copyright, Kanye, and the Weird Appeal of AI Artists
Wes goes on a mini-rant about loving artists like Kanye West musically but hating when the person behind the music makes the art harder to enjoy. That leads to a strange upside of AI musicians: in theory, they don’t self-destruct in public the way human stars do. At the same time, they acknowledge this gets messy fast, because AI characters can still be misused, identity can be faked, and accountability gets fuzzier, not cleaner.
The Legal Angle: Training vs. Infringement
One of the most substantive stretches is Wes arguing that people are forcing AI into old copyright categories that may not fit. He points to past legal logic around copying, search crawlers, and internet infrastructure to say training itself hasn’t historically been treated the same as illegal reproduction or distribution, and he references a recent Sony/Cox-style ruling as a sign that going after AI companies may be harder than critics assume. Dylan’s takeaway: major labels may sue, but it’s still unclear where real leverage comes from.
Other AI News, Briefly: Comedy Robots, Film Negatives, and Bot-Swarming GitHub
In classic livestream fashion, they wander into three other AI stories. Dylan talks about a robot comedian insulting live audience members with the anonymous, Reddit-style boldness humans usually hide behind usernames; a creator using AI to convert old film negatives into plausible images, imperfect fingers and all; and a wild GitHub tactic where developers intentionally leave AI-solvable bugs to attract swarms of coding agents and boost project visibility.
The Stream Finds Its Soul in Mongolian Throat Singing
Then everything derails in the best way. Chat suggests bizarre style mashups — Mongolian throat singing, gothic whimsical undertones, orchestral Warcraft, dubstep — and Suno starts spitting out tracks that genuinely stun both hosts. The peak is “Fast Cars Go Vroom,” which they treat like a legitimate release, replaying it as if they’ve discovered a commercially viable genre by accident. Wes ends with the key product observation: older Suno felt like a casino, but 5.5 feels more intentional, more steerable, and much closer to a real creative tool.