AI Has Driven The Cost of Ideas to Zero – Terence Tao
TL;DR
AI makes ideas cheap, not science easy — Terence Tao says AI has driven the cost of generating ideas “down to almost zero,” much as the internet did for communication, which shifts the bottleneck from producing theories to verifying and evaluating them.
Peer review was built for scarcity, not a flood of machine-generated papers — Tao notes journals are already being overwhelmed by AI submissions, and systems designed to filter a limited number of amateur or low-signal ideas are breaking under massive-scale generation.
The real unsolved problem is spotting the one big idea among millions of decent ones — host Dwarkesh Patel uses Claude Shannon’s “bit” as the example: among many Bell Labs-era papers on signal engineering, one concept ended up reshaping probability, computer science, and beyond.
Time and adoption matter as much as technical merit — Tao argues that many important ideas, like deep learning, looked fringe or controversial at first and only became obviously valuable once other researchers extended them and the world built around them.
Standards win partly because society converges on them, not because they’re uniquely optimal — Tao points to decimal notation, binary over ternary logic, and transformers as cases where lock-in and cultural momentum matter, making it hard to grade ideas “objectively” in isolation.
This may be a bad fit for simple reinforcement-learning-style evaluation — because an idea’s value depends on future context, culture, and downstream adoption, Tao suggests scientific importance may never be something you can score cleanly like a localized optimization problem.
The Breakdown
AI Crashes the Price of Idea Generation
Tao opens with the core analogy: AI has done to ideas what the internet did to communication — driven the marginal cost to nearly zero. He’s excited about that, but immediately adds the catch: abundance of ideas does not automatically create abundance of knowledge.
Science’s Old Filters Are Getting Swamped
He says science historically dealt with low-quality theorizing by building walls: peer review, journals, and publication systems that tried to isolate high-signal work from noise. That model assumed idea scarcity; now AI can produce explanations at massive scale, and human reviewers are already getting buried by AI-generated journal submissions.
The Bottleneck Moves From Creation to Validation
What matters now, Tao says, is verification, validation, and deciding which ideas actually move a field forward versus which are dead ends or red herrings. Scientists can debate one paper over a few years and eventually form consensus, but that process completely breaks when you’re generating “a thousand of these every day.”
The Hard Part: Finding the Next “Bit”
Dwarkesh sharpens the question with a Bell Labs-era example: amid lots of papers on pulse code modulation, analog wires, and engineering constraints, one idea — the bit — had consequences far beyond its original niche. The challenge in an AI-rich world is figuring out how to identify the next unifying concept when millions of papers may all show some local progress.
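As an illustrative aside (not something worked through in the conversation): the “bit” Shannon introduced is the unit of the entropy measure at the heart of information theory, which quantifies how much uncertainty a message resolves. A minimal sketch:

```python
from math import log2

def entropy_bits(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p)) over outcome probabilities."""
    return -sum(p * log2(p) for p in probs if p > 0)

# A fair coin toss carries exactly 1 bit of information.
print(entropy_bits([0.5, 0.5]))   # 1.0
# A biased coin carries less: as outcomes get predictable, uncertainty shrinks.
print(entropy_bits([0.9, 0.1]))
```

The same formula that grades a coin toss turned out to bound telephone lines, file compression, and error-correcting codes, which is the kind of reach that makes an idea like the bit so hard to spot in advance.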
Great Ideas Often Look Unimpressive at First
Tao responds that a lot of this is “the test of time.” He points to deep learning, which spent years as a controversial, niche corner of AI because learning answers from data, rather than deriving them from first principles, did not initially look like the obvious winning approach.
Standards Aren’t Purely Objective Winners
He then broadens the point with examples: binary logic won out over ternary alternatives, transformers became the foundation of modern LLMs, and base-10 notation stuck not because 10 is metaphysically special but because everyone standardized around it. In another universe, he suggests, a different architecture or numbering convention might have become dominant.
Why Scientific Value May Resist Clean Scoring
That leads to his final caution: you cannot grade a scientific idea in isolation, without knowing the historical context behind it and the future ecosystem that might adopt it. Because value depends on culture, timing, and downstream uptake, Tao thinks scientific importance may never be something you can reinforcement-learn the way you can a more localized optimization problem.