Alex Kantrowitz · 32m

Is AI A Privacy Disaster? And How To Fight Back. — With Andy Yen

TL;DR

  • Opting out of AI training does not mean opting out of data collection — Andy Yen says ChatGPT, Claude, and Gemini can still store chats indefinitely, hand them over under subpoena, or leak them in a breach even if you disable training.

  • LLMs are a privacy upgrade for Big Tech, not just a better interface — compared with search, months of back-and-forth chat gives companies far deeper insight into your personality, habits, and vulnerabilities, which Yen describes as “Google’s business model on steroids.”

  • “Incognito” AI modes are better than defaults, but still trust-based — Yen points to Google’s multibillion-dollar incognito settlement as proof that private modes are only as strong as the company’s word, so they’re not remotely bulletproof for things like tax returns.

  • Parents are waking up late to the cost of putting kids online early — Proton’s survey found 70% of kids have smartphone access by age 10, about three-quarters use Gmail, 41% of parents would redo their choices, and 60% wish they could erase their child’s data from major platforms.

  • Yen argues the real problem is the business model, not just the product design — his case is that ad-driven companies are structurally incentivized to profile users and hook kids early, while Proton tries to align incentives through paid privacy services and end-to-end encryption.

  • Proton’s counter-program is to reserve a child’s email identity outside Google for 15 years for $1 — the “Born Private” offer is framed as giving kids a digital passport without automatically creating a lifelong advertiser ID, with the $1 going to the Proton Foundation.

The Breakdown

Why AI chat is more invasive than people realize

Alex opens with a familiar shock: most people using ChatGPT or Claude are opted into training unless they manually switch it off. Andy Yen immediately makes it darker — even if you opt out of training, the provider may still keep the data forever, hand it to governments under subpoena, or expose it later in a hack. His core point: people think they turned off learning, but they did not turn off collection.

“Google’s business model on steroids”

Yen says LLMs are basically a more efficient way for humans to talk to computers, but conversation is exactly what makes them so revealing. Search tells Google what you’re interested in; long-running chats tell it how you think, how you speak, and who you are. He compares it to meeting someone in a pub versus reading their LinkedIn — conversation gives away the real person.

The false comfort of “incognito” chat

Alex asks whether private or incognito chat modes offer real protection. Yen’s answer is blunt: maybe somewhat, but only if you trust the company, and Google’s history gives him no reason to. The exchange lands with a joke about Alex uploading tax returns in a “secret chatbot,” and Yen basically says: yeah, don’t do that.

Kids are entering the surveillance web by age 10

The conversation shifts to children, and Proton’s survey numbers are meant to sting: 70% of kids have a smartphone by age 10, and around three-quarters of them are already using Gmail. Yen says kids have no real understanding of privacy or what OpenAI and Google do with data, while parents are only now feeling the regret — 41% would do it differently, 60% wish they could erase their child’s information, and roughly 80% worry about online privacy.

Why Yen doesn’t trust platform guardrails

Asked about age gates and minor-detection systems, Yen says guardrails may exist but the incentives still point the wrong way. He argues these companies ultimately make money by exploiting data, and he uses an intentionally provocative analogy: social platforms are like neighborhood drug dealers, trying to hook children early so they become profitable later. Alex doesn't fully buy the drug-dealer metaphor, but agrees the "we make less money from teens" defense is hardly saintly.

Proton’s pitch: privacy only works if incentives line up

Yen says this is fundamentally a business-model fight. Proton's answer is Lumo, a private AI assistant built so Proton itself can't easily inspect past conversations, because, in his framing, the best way to protect data is not to possess it in the first place. He keeps returning to the same contrast: Google and OpenAI promise privacy while retaining financial reasons to collect, whereas Proton survives only if users trust that it actually protects them.

Email as your digital passport — and Google’s lock-in tool

One of the sharper sections is Yen’s argument that email is not just communication, but identity. He says Gmail was so strategically important because it keeps users permanently logged into Google, letting cookies, Google Ads, and Google Analytics stitch together a unified profile across huge swaths of the web. That’s why Proton started with mail: if you want to leave Google’s ecosystem, first you need a non-Gmail identity.

“Born Private,” login identity, and why AI may still open the market

From there, Yen introduces Proton’s “Born Private” program: for $1, parents can reserve a child’s Proton address for 15 years instead of creating what he calls a lifelong advertiser ID through Gmail. He extends the same logic to “Login with Google” or “Login with ChatGPT,” saying those systems become correlation engines that reveal activity across services. The closing note is surprisingly optimistic: even though Big Tech is spending staggering sums on AI, Yen believes open models, Moore’s law, and Nvidia’s incentives will commoditize frontier AI enough that privacy-focused players can stay competitive.