Fooled by the Prompt
March 21, 2026

AI coding becomes gambling when it gives the operator small, frequent wins up front while pushing rare, severe losses into the future.
Skin in the Code
AI coding turns into gambling when small wins arrive fast and large losses arrive late. That is why it feels so good.
Ask an agent to clean up a tangled auth flow. It edits 14 files in 40 seconds. The tests go green. A task that once required an afternoon of reading, tracing, and careful edits now feels almost weightless.
On Monday, an enterprise customer with a strange SSO setup can't log in. The broken path lived in one ugly branch nobody wanted to reread. The model is gone. The maintainer gets the pager.
AI coding makes more sense through Nassim Taleb than through benchmark charts. Taleb keeps returning to the same few questions: who carries downside, how often people mistake luck for skill, and what happens when rare failures finally arrive. Coding agents hit all 3.
The payoff curve matters more than the average output. Agents hand out frequent, visible gains. The losses hide in the tail. They surface later, under stress, and usually land on a human.
The Hidden Asymmetry in AI Coding
A coding agent is a language model that can read, write, and edit code across a repository. That changes the rhythm of programming.
Traditional coding has friction. You reopen old decisions. You trace dependencies. You rebuild context that has decayed since the last time you touched a file. That friction costs time. It also builds judgment.
Agentic coding strips much of that front-end friction away. You ask. It responds. Sometimes the answer is mediocre. Sometimes bizarre. Sometimes shockingly good. That last case is the hook.
Variable rewards create strong habits. Most prompts do not resolve the problem cleanly. A few do, and those few feel magical enough to keep you pulling. Soon the work centers on the next prompt rather than the system itself.
Craft and gambling train different reflexes. Craft turns friction into judgment. Gambling turns friction into another try.
Convex Upside, Hidden Downside
Taleb often points to a basic rule: the person who creates risk should carry the downside.
A model carries none of it. It will not face the rollback, the incident review, the angry customer, or the slow erosion of trust inside a team. The human does.
This asymmetry grows as AI moves from toy apps into systems that touch money, identity, data, or compliance. Those systems fail in uneven ways. Most of the time they look fine. Then one neglected edge case concentrates the cost.
A retry loop duplicates charges. A migration handles 99.8 percent of rows and corrupts the rest. A permission check passes every happy-path test and still leaks one account's data. "Mostly right" sounds reassuring until the wrong 0.2 percent belongs to your highest-value customer.
Mean output quality is a poor metric here. Tail risk decides.
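The retry case is concrete enough to sketch. Below is a minimal, hypothetical payment flow (the gateway, keys, and function names are invented for illustration): the naive retry generates a fresh request key per attempt, so a lost acknowledgment after a successful charge turns every retry into a new charge. The fix is a single idempotency key for the whole operation.

```python
import uuid

class FlakyGateway:
    """Hypothetical gateway: the first charge lands, but its response is lost."""
    def __init__(self):
        self.charges = {}          # idempotency key -> amount actually charged
        self.dropped_response = False

    def charge(self, key, amount):
        if key in self.charges:    # gateway deduplicates by key
            return "ok"
        self.charges[key] = amount
        if not self.dropped_response:
            self.dropped_response = True
            raise TimeoutError("response lost after charge succeeded")
        return "ok"

def charge_with_retry(gateway, amount, retries=3):
    # Naive: a fresh key per attempt, so each retry is a brand-new charge.
    for _ in range(retries):
        try:
            return gateway.charge(uuid.uuid4().hex, amount)
        except TimeoutError:
            continue

def charge_idempotent(gateway, amount, retries=3):
    # One key for the whole operation: retries hit the same charge record.
    key = uuid.uuid4().hex
    for _ in range(retries):
        try:
            return gateway.charge(key, amount)
        except TimeoutError:
            continue

naive, safe = FlakyGateway(), FlakyGateway()
charge_with_retry(naive, 100)      # customer charged twice
charge_idempotent(safe, 100)       # customer charged once
```

Both versions pass a happy-path test, because the happy path never drops a response. The difference only shows up in the tail.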
The safest rule is blunt: the same person who uses the model should own the consequences if it breaks. Once authorship and accountability split, the tool starts rewarding speed on one side and fragility on the other.
The Slot Machine in Your Terminal
Browse any social feed and the pattern is familiar. Someone posts a one-shot success: a full app in a weekend, a refactor in an hour, a feature that used to take days. Many of those wins are real. They are also a biased sample.
Taleb wrote Fooled by Randomness about the habit of reading skill into outcomes that contain more luck than people admit. AI coding makes that mistake easy. Fast wins are visible. Missing understanding is not.
The same bias shows up inside teams. A model helps an engineer ship 5 features in a week. Everyone sees the speed. Fewer people see the drift in local knowledge. More code exists. Less of it has a real owner.
That is the craft problem. Good builders do not just produce code. They develop a feel for invariants, failure modes, and ugly corners of a system. That feel comes from contact. It comes from wrestling with the hard middle, not just approving output at the end.
A tool can widen your reach while thinning that contact. That is the trade.
Prompt, Pray, Merge
Taleb's barbell is a useful frame for working with AI. Keep one side very safe. Put the experimentation on the other side. Avoid the mushy middle.
Coding agents are excellent for cheap optionality. Use them to explore a new framework, scaffold an internal tool, generate tests, draft a scraper, compare API clients, or build a throwaway dashboard to see whether a workflow matters at all. A founder can test 6 ideas in a weekend, kill 5, and keep the learning. That is a good bet. Downside is capped. Upside is open.
Problems start when disposable code stops being treated as disposable. Modern software often begins as an experiment and stays in production by accident. A tool that makes early code easy to generate also makes early mistakes easy to entrench. The prototype gets one customer, then 10, then revenue, and now a branch written for speed has turned into infrastructure.
The Work and the Wager
Taleb often prefers subtraction to addition. He calls that instinct via negativa. The same idea helps here.
Use the model to shrink systems before you let it expand them. Ask it to find dead code, duplicate conditionals, stale flags, brittle tests, and unneeded abstraction. Ask for counterexamples, invariants, rollback plans, and edge cases. Ask what should be deleted and what can fail.
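The stale-flag sweep is the easiest of these to mechanize. Here is a rough, hypothetical sketch (the file layout and function name are invented, and a real repo would need a smarter parser than a substring scan): flags declared in a central module but never referenced anywhere else become candidates for deletion rather than discussion.

```python
import re
from pathlib import Path

def stale_flags(repo_root, flags_file="flags.py"):
    """Return flags declared in flags_file but never referenced elsewhere.

    Crude heuristic: a flag is 'declared' if it appears as an UPPER_CASE
    assignment in flags_file, and 'used' if its name shows up as a
    substring in any other .py file under repo_root.
    """
    source = Path(repo_root, flags_file).read_text()
    declared = set(re.findall(r"^([A-Z_]+)\s*=", source, re.M))
    used = set()
    for path in Path(repo_root).rglob("*.py"):
        if path.name == flags_file:
            continue
        text = path.read_text()
        used.update(name for name in declared if name in text)
    return sorted(declared - used)
```

Handing the model a list like this keeps it on the subtraction side of the ledger: it proposes deletions, and a human who owns the consequences decides what actually goes.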
That keeps AI in service of understanding. It also protects the part of programming that matters most. Craft lives in ownership of the logic, the trade-offs, and the failure surface. It lives in knowing why the system works, where it is brittle, and what you are willing to trust.
AI can help with that. It can also train the opposite habit. You stop building a mental model and start sampling for a lucky completion. The work feels faster. The builder gets thinner.
Craft or Chance
The teams that benefit most from AI coding will manage asymmetry better than they automate keystrokes.
Ownership should sit close to authorship. Code review should spend more time on failure modes than surface neatness. Model output should enter the system as a draft that earns trust through review. Leadership should track reversibility, incident rates, and code ownership alongside tickets closed.
The biggest risk is the productivity theater of partial understanding. AI can make not knowing feel productive. That feeling is powerful. It is also expensive.
The teams that come out ahead will use AI to buy optionality at the edge and keep craft at the center. Without that discipline, the editor starts to look a lot like a casino.