OpenAI President Greg Brockman: AI Self-Improvement, The Superapp Bet, Path To AGI, Scaling Compute
TL;DR
Brockman says AGI is basically 2 years away — He puts OpenAI at “70–80% there” already and predicts that within the next couple of years AI will handle “almost any intellectual task” on a computer as a baseline, even if the result is still “jagged.”
OpenAI is narrowing its bets to two products: a personal assistant and an agent for hard knowledge work — Brockman says compute is so constrained that OpenAI can’t fund everything, which is why GPT-style reasoning gets priority over Sora-style video generation despite calling Sora “incredible.”
The ‘super app’ is OpenAI’s big product thesis — Brockman describes a unified app that combines ChatGPT, Codex, browser use, memory, email/calendar context, and computer control so “anything you want your computer to do, you can ask it.”
OpenAI’s real bottleneck isn’t ideas — it’s compute — Brockman says after ChatGPT launched he told his team to buy “all of it,” and argues the $110 billion infrastructure push is justified because demand for coding agents, personal agents, and enterprise knowledge work already outstrips available GPUs.
He thinks coding crossed a major trust threshold in December 2025 — Brockman says models jumped from doing “20% of your tasks to like 80%,” citing a website-building task that once took months to code by hand and now succeeds in one shot from a single long-running prompt.
OpenAI wants AI to help build better AI, but not without human oversight — Brockman describes an automated AI researcher coming this fall that could do the full workflow of a research scientist “in silicon,” while stressing it should be managed more like a junior researcher than left alone.
The Breakdown
Why OpenAI is walking away from Sora (for now)
Brockman frames the shift not as “consumer to enterprise” but as a forced prioritization moment: the tech is finally useful enough in the real world that OpenAI has to choose where limited compute goes. The top two bets, he says, are a personal assistant and an AI that can solve hard problems for you — and there isn’t even enough compute to fully fund those, let alone everything else.
The tech-tree split: GPT reasoning wins over world models
He leans on a vivid metaphor here: Sora and GPT sit on different branches of the tech tree, and maintaining both at full speed is too expensive. Brockman acknowledges Demis Hassabis’s argument that image/video systems may feel closer to AGI, but says OpenAI is betting on the GPT branch because it now has “line of sight” that text-and-reasoning models go all the way to AGI.
The super app: one interface for your whole digital life
Brockman’s big product vision is an “endpoint application” that merges ChatGPT, Codex, browsing, memory, and computer use into one place. His pitch is simple and ambitious: your computer should contort to you, not the other way around — whether that means writing a wedding speech, fixing hot-corner settings on a laptop, or using your email and calendar context to act on your behalf.
Codex stops being ‘for coders’
One of the liveliest moments comes when Brockman lights up at the story of Codex helping someone build an Adobe Premiere plugin for video editing. He says the surprise inside OpenAI is that even with terrible usability for non-programmers, people who’ve “never programmed before” are already using it to build websites, automate workflows, and synthesize Slack and email feedback.
The healthiest thing for OpenAI? Competition
Brockman says the scariest moment at OpenAI wasn’t a setback — it was the holiday party after ChatGPT launched, when he felt a dangerous vibe of “we won.” He hated it, insisting OpenAI has to feel like the underdog, and says the rise of competitors has actually been healthy because it snapped the company back into focus and killed the sense of complacency.
What the next model wave changes
He won’t say much about “Spud” beyond calling it a new base model, but he claims OpenAI has “two years’ worth of research” coming to fruition. His description of the payoff is less benchmark-y than experiential: fewer moments where the AI disappoints you, more “big model smell,” better instruction-following, much longer time horizons, and more open-ended problem solving — including a story where a physicist gave the model a hard problem and got a solution 12 hours later.
AI improving AI — and the start of takeoff
Brockman describes “takeoff” as the point where AI speeds up its own development and also becomes the main driver of economic growth. That’s where the automated AI researcher fits in: not a magical “go find AGI” black box, but a system that can do the end-to-end work of a research scientist while humans provide direction, judgment, and review.
Why Brockman is all-in on compute, despite the backlash
He’s blunt that OpenAI’s biggest expense is compute and that the company has never been able to buy enough of it. Brockman compares GPUs to hiring more salespeople — not a cost center but a revenue center — and says the enterprise demand curve plus visible model improvements make the infrastructure bet obvious, even as he pushes back on fears about data centers, public distrust, and claims that OpenAI is “yoloing” too hard.