The AI Moloch
March 18, 2026

Faster, Busier, Emptier
AI doesn't just make you faster. It resets the pace for everyone.
AI was supposed to make work lighter. Instead, in more and more fields, it's turning speed into obligation. Once enough people can code, draft, summarize, design, and plan with AI, the question stops being "Should I use it?" and becomes "Can I afford not to?" That's the AI version of Moloch.
Game theory is just the study of situations where your best move depends on what everyone else does. Moloch is one of its darkest patterns: each person makes the rational local choice, and the whole system gets worse. Scott Alexander helped popularize the modern internet use of the term in his 2014 essay "Meditations on Moloch," borrowing the image from Allen Ginsberg's "Howl." It's a label for coordination failure at scale: nobody likes the setup, but nobody wants to be the only one who opts out. (Slate Star Codex)
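To see the shape of the trap, here's a minimal sketch in Python. The payoffs are invented for illustration, not taken from any study: using the tool is the best move no matter what the other player does, yet when both players make that move, both end up worse off than if neither had.

```python
# Toy payoff model of the Moloch trap. The numbers are illustrative
# assumptions; "adopt" vs "abstain" stands in for "use the AI tool"
# vs "keep the old pace."

PAYOFFS = {  # (my_move, their_move) -> my payoff
    ("abstain", "abstain"): 3,  # both keep a sane pace
    ("adopt",   "abstain"): 5,  # I outpace them
    ("abstain", "adopt"):   0,  # they outpace me
    ("adopt",   "adopt"):   1,  # baseline resets; everyone is just busier
}

def best_response(their_move: str) -> str:
    """My payoff-maximizing move, given what the other player does."""
    return max(("adopt", "abstain"), key=lambda mine: PAYOFFS[(mine, their_move)])

# Adopting dominates: it is the best reply to either move...
assert best_response("abstain") == "adopt"
assert best_response("adopt") == "adopt"

# ...so both players adopt and land on payoff 1, even though both
# would prefer the (abstain, abstain) world, which pays 3 each.
print("equilibrium payoff each:", PAYOFFS[("adopt", "adopt")])      # 1
print("cooperative payoff each:", PAYOFFS[("abstain", "abstain")])  # 3
```

This is the classic prisoner's-dilemma shape: the individually dominant move produces the collectively worse outcome, and no single player can fix it by choosing differently.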
AI sharpens this trap because it cuts the cost of output so aggressively. But cheaper output rarely buys leisure on its own. In competitive systems, it usually resets the baseline. What feels like leverage at first becomes expectation, then job requirement, then table stakes. The tool helps you, then the market rebuilds itself around the help. That is why AI can feel like relief and pressure at the same time.
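The reset itself is mechanical enough to sketch. Under one toy assumption, mine purely for illustration, that the market's expectation each round simply ratchets up to whatever pace was demonstrated last round, the tool's entire gain is absorbed almost immediately:

```python
# Toy ratchet: a one-time speedup gets absorbed into the baseline.
# Assumption (mine, for illustration): expectations each week reset
# to the fastest pace demonstrated the week before.

old_pace = 10.0          # units of output per week before the tool
tool_multiplier = 2.0    # the tool doubles throughput
output = old_pace * tool_multiplier

baseline = old_pace      # what the market expects today
for week in range(1, 4):
    slack = output - baseline  # leverage: output above expectation
    print(f"week {week}: output={output:.0f}, expected={baseline:.0f}, slack={slack:.0f}")
    baseline = output          # expectation ratchets up to the new pace
```

The doubled output buys slack for exactly one round. After that, the new pace is just the job.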
The Cost of Refusing the Machine
Take the careful software developer. They don't reject AI because they worship manual typing. They reject the bargain around it. A model can generate a feature, a refactor, or a scaffold in minutes, but somebody still has to judge whether the abstractions hold, whether the edge cases are covered, or whether the code will still make sense six months from now.

Meanwhile the competition moves upstream. Other developers are no longer just writing code. They're designing better AI workflows, tighter review loops, and faster ways to steer the model. So the careful developer starts using AI too, not because they fully trust it, but because refusing it starts to feel like professional self-sabotage. They swap typing for supervision, then feel trapped between two bad options: move fast and lower the bar, or protect the bar and look slow. They can feel behind even when their judgment is stronger, because the market measures output first and quality later, if ever.
Craft Meets the Output Race
The same logic now hits knowledge work almost everywhere. AI was sold as a productivity tool, and it is one. But productivity inside a competitive system rarely turns into rest. It turns into appetite. If a team can draft strategy documents, summarize calls, answer emails, build decks, and analyze data faster, most organizations don't respond by shrinking the workload. They raise the quota. The worker gets more leverage, but less slack. The old bottleneck disappears, and a new one takes its place: human attention, human judgment, human endurance. So people don't step off the treadmill. They speed it up.
The Tyranny of Leverage
Writing may be the clearest example because the loss shows up on the page. Good writing is slow because thought is slow. You gather facts, test claims, find the real sentence, cut the fake one, and come back later to see what's still true. AI is useful for routine copy, research outlines, and rough first passes. But once speed becomes the norm, the writer faces a bad trade: keep the deliberate process and look inefficient, or lean on the machine and risk sanding off voice, judgment, and originality. The danger isn't just bland prose. It's borrowed thinking. When you outsource too much of the sentence, you often outsource the mind that sentence was supposed to reveal.
Once you see the pattern, it shows up everywhere:
- The tool that saves you 2 hours becomes the reason you're assigned 4 more tasks.
- The developer who cares about quality pays for it in private, because the market rewards speed in public.
- The manager who can summarize every meeting faster just makes room for more meetings.
- The analyst who can make 10 slides before lunch trains the firm to ask for 20 by dinner.
- The recruiter who can screen 1,000 résumés now has to explain why only 200 were screened yesterday.
- The support team that automates replies teaches customers to expect instant answers at 11 p.m.
- The salesperson with AI outreach doesn't build more trust. They just flood more inboxes until trust collapses.
- The founder who knows the demo is brittle still ships it, because caution looks like sleep.
- The student who writes honestly loses to the student who submits polished mush on time.
- The teacher who bans AI becomes a cop. The teacher who allows it becomes a detective.
- The writer who pauses to think looks slow next to the writer who prompts and posts.
- The designer who can generate 100 options spends the afternoon killing 97 bad ones.
- The lawyer who drafts faster doesn't get a lighter caseload. They get tighter deadlines and more review risk.
- The consultant who can synthesize faster doesn't get paid to think deeper. They get paid to answer sooner.
- The researcher who uses AI to scan papers faster now has to scan through AI-written papers too.
- The newsroom that publishes faster turns accuracy into a luxury good.
- The job seeker who refuses AI sends 5 careful applications; the one who uses it sends 200 plausible ones.
- The investor with AI diligence sees more deals and has less time to think about any of them.
- The product manager who can generate 50 ideas in an hour still has to live with the 5 the company ships.
- The company that gives everyone AI assistants may not get better work. It may just get bad work at scale.
Watch where AI shifts from option to obligation. You'll see it in hiring, turnaround times, quotas, publishing schedules, classroom norms, and the quiet spread of expectations that nobody explicitly chose. That's where the trap hardens. Not in the model itself, but in the incentives that grow around it.
The deepest question about AI may not be whether the tools get smarter. They will. The harder question is whether we can stop every gain in capability from turning into a new duty. A society can invent better tools and still make daily life worse. If we let every speed gain become the new minimum, the system won't need to coerce us. It will just keep setting the pace. In the end, the question is not what the model can do. It's what kind of life its incentives force people to live.