Building Is the Easy Part Now | Mike Krieger on What AI Changed
TL;DR
AI made building cheap, but not judgment — Mike Krieger says Claude can now rebuild something like Instagram’s precursor Burbn in about two hours, yet models are still much better at adding features than deciding what to cut.
The new product trap is overbuilding before users ever touch it — Krieger and Dan Shipper both describe “indoor tree” products: fully formed, feature-rich apps built fast with AI that miss the intuition and hardening you only get from real-world use.
Rewrites are back because they no longer cost a year — At Anthropic Labs, teams will build a full V1, realize they overcomplicated it, then tear it down and rebuild in days, not months, which changes how seriously founders should consider starting over.
‘Agent native’ software is the next design frontier — Krieger argues products should know their own primitives and let agents act directly on them; his example was Claude.ai still saying “here are the steps” instead of actually adding something to project knowledge.
Robustness now matters at two levels: systems and prompts — Anthropic still hires for distributed systems and architecture, but also pairs product teams with applied AI experts because a brittle tool graph or a 100-line prompt can fail just as badly as flaky infrastructure.
Small, conviction-heavy teams beat larger AI product orgs early on — Labs bets work best when one person has founder-level obsession with the problem, while too many people too soon creates coordination drag just as products may need to delete half their code every 3-6 months.
The Breakdown
Building got radically faster, but taste still takes time
Krieger opens with a useful distinction: the “building” part of software is now dramatically easier, but knowing what should exist in the product is not. He contrasts Instagram’s original path — a year on Burbn, then three months to build Instagram — with today, where Claude rebuilt Burbn in about two hours and even added filters on its own. His point is blunt: models are great at feature addition, not product subtraction.
The ‘indoor tree’ problem of AI-built products
Dan offers a metaphor Krieger immediately loves: a tree grown indoors without wind looks like a tree, but it never develops the strength it would from real exposure. That becomes their frame for modern AI products — you can now build the whole thing in one shot, but you skip the sequence of user contact and incremental decisions that builds intuition. Krieger adds another version: shipping a product can feel like dropping someone into the final episode of a TV show and expecting them to know all the characters.
Why both of them are throwing products away and starting over
Dan talks about his side project Proof, an “agent-native collaborative marketing editor,” and admits early versions became “this monstrosity” because vibe coding made it too easy to keep adding. Krieger says Anthropic Labs has seen the same pattern and now embraces rewrites pre-launch because they’re measured in days, not the year-long existential bets Fred Brooks warned against. The emotional shift is huge: starting over no longer feels like corporate tragedy; it feels like “that was last week.”
Agent-native software means computers finally ‘just work’
Krieger says a non-technical friend described the whole AI shift as: “computers just work now.” For him, agent-native design is about software exposing all its primitives so the model can actually act, not merely explain. His example is sharp: if Claude can create something in a project, it should be able to add it to project knowledge directly, not reply with step-by-step instructions for the human.
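The agent-native idea can be sketched as a small tool registry: instead of the model describing steps for the human, the product exposes its primitives as functions an agent can invoke directly. Everything below (the `ToolRegistry` class, the `add_to_project_knowledge` primitive) is a hypothetical illustration of the pattern, not Anthropic’s actual API.

```python
# Hypothetical sketch: a product exposing its primitives as agent-callable
# tools, so the model can act directly instead of emitting instructions.

from typing import Callable, Dict

class ToolRegistry:
    """Maps primitive names to callables the agent can discover and invoke."""
    def __init__(self):
        self._tools: Dict[str, Callable] = {}
        self._docs: Dict[str, str] = {}

    def register(self, name: str, fn: Callable, description: str):
        self._tools[name] = fn
        self._docs[name] = description  # agent-readable documentation

    def invoke(self, name: str, **kwargs):
        return self._tools[name](**kwargs)

# A product primitive: adding a document to project knowledge.
project_knowledge = []

def add_to_project_knowledge(title: str, content: str) -> dict:
    doc = {"title": title, "content": content}
    project_knowledge.append(doc)
    return {"status": "added", "title": title}

registry = ToolRegistry()
registry.register("add_to_project_knowledge", add_to_project_knowledge,
                  "Store a document in the current project's knowledge base.")

# The agent acts on the primitive directly, rather than telling the user how:
result = registry.invoke("add_to_project_knowledge",
                         title="Launch plan", content="V1 scope and milestones")
print(result["status"])  # prints "added"
```

The design point is that the registry's descriptions are written for the model, not the human: the product advertises what it can do, and the agent closes the loop itself.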
Teaching models to build in this new style
They get into the hard part: models still think like traditional engineers unless you deliberately steer them. Krieger says Anthropic uses skills, templates, and examples — even a skill for the Claude API itself — to give the model better patterns while building. But the bigger issue is testing: agent-native products are unpredictable by design, so you need richer verification than unit tests, including harnesses that let Claude actually interact with the app and surface weird emergent behavior, like Claude accidentally chatting with itself inside a prototype.
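The kind of harness Krieger describes can be imagined as a loop that drives the app with an agent and inspects the transcript for emergent misbehavior, rather than asserting fixed outputs. The sketch below is a toy assumption of that idea: `fake_agent` stands in for a model call, and the harness catches the exact failure mentioned above, an agent ending up in conversation with itself.

```python
# Hypothetical sketch of a verification harness for an agent-native app:
# drive the app with an agent and scan the transcript for emergent loops,
# e.g. the agent replying to its own earlier messages.

def fake_agent(message: str) -> str:
    """Stand-in for a model call in the harness."""
    return f"Reply to: {message}"

def run_session(app_inbox, agent, max_turns=5):
    """Simulate the app's message loop; the bug under test routes the
    agent's reply straight back into its own inbox."""
    transcript = []
    for _ in range(max_turns):
        if not app_inbox:
            break
        msg = app_inbox.pop(0)
        reply = agent(msg)
        transcript.append((msg, reply))
        app_inbox.append(reply)  # the buggy routing being probed
    return transcript

def detect_self_conversation(transcript):
    """Flag turns where the agent was answering one of its own replies."""
    replies = {reply for _, reply in transcript}
    return [msg for msg, _ in transcript if msg in replies]

transcript = run_session(["user: hello"], fake_agent)
loops = detect_self_conversation(transcript)
print(len(loops) > 0)  # prints True: the agent chatted with itself
```

Unit tests would have passed here (every turn produced a well-formed reply); only exercising the whole loop surfaces the weird behavior.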
Proof of work is now proof that you really used the thing
Krieger says he now asks Claude to “prove to yourself and then to me that it works as intended” before opening a PR. Passing tests is table stakes; what matters is whether the feature was actually exercised and whether the human reviewed the model’s decisions with any real thoughtfulness. That lands especially hard when Dan describes trying to onboard coworkers into a fast-growing, heavily vibe-coded codebase he only partially understands himself.
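One way to picture the “prove it works” gate is a check that refuses to open a PR until the new feature was actually invoked, not just covered by imports or passing assertions. The decorator, feature names, and gate below are all hypothetical scaffolding for the idea, not a real workflow.

```python
# Hypothetical sketch of a "prove you used it" gate: the PR only opens
# once every new feature has genuinely been exercised.

exercised = set()

def tracked(feature_name):
    """Decorator recording that a feature was actually invoked."""
    def wrap(fn):
        def inner(*args, **kwargs):
            exercised.add(feature_name)
            return fn(*args, **kwargs)
        return inner
    return wrap

@tracked("export_csv")
def export_csv(rows):
    """The new feature under review."""
    return "\n".join(",".join(map(str, r)) for r in rows)

def ready_for_pr(required_features):
    """Gate: report whether every new feature was exercised, and which were missed."""
    missing = set(required_features) - exercised
    return (len(missing) == 0, missing)

ok, missing = ready_for_pr(["export_csv"])
print(ok)  # prints False: nothing has run the feature yet

export_csv([[1, 2], [3, 4]])
ok, missing = ready_for_pr(["export_csv"])
print(ok)  # prints True: the feature was actually exercised
```

The point of the sketch is the ordering: passing tests is necessary but not sufficient; the gate only clears after real usage.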
Who Anthropic hires now: systems thinkers, builders with taste, and designer-coders
AI changed team composition, but not in the simplistic “everyone can code now” way. Krieger says Anthropic still values senior technical people with distributed systems intuition, because robustness matters under the hood and because prompt patching can become as unhealthy as infrastructure duct tape. At the same time, Labs leans on designer-builders and small co-founder-style pairs, often with a highly opinionated designer or product-minded engineer driving the idea and a strong engineer helping pave the road behind them.
Enterprise pressure, feature deletion, and the OpenClaw future
Late in the conversation, Krieger argues companies need to accept that the train is moving: enterprise customers may want stability, but AI products may require major rethinks every few months, not every few years. He points to internal debates over removing little-used Claude features like Styles, which turn out to be mission-critical for a handful of companies, and says the answer may be plugins or skills rather than endlessly bloating the core product. On OpenClaw, he sees a glimpse of the next frontier: a highly personal, tool-using agent that feels like yours — powerful, intimate, and slightly dangerous — and the central product question now is how to capture that openness without letting the system go haywire.