Understanding Latent Demand
March 19, 2026

Latent demand is the missing variable in most AI strategies. It explains why a feature that looks modest in a demo can reshape a market, and why a dazzling model demo can still fail to become a business.
The concept: people want an outcome; friction keeps them from acting. AI strips away enough of that friction that the behavior finally becomes worth it. When that happens, demand does not just move from one tool to another. It expands.
Markets rarely show you pure desire. They show desire after price, time, skill, coordination, trust, and habit have already sanded it down. Users learn the shape of those constraints. Then they describe the constrained version as if it were preference.
A founder hears, "We don't need weekly competitive analysis." The real statement is often, "We don't need weekly competitive analysis at today's cost, with today's workflow, and with today's burden of review."
That gap is latent demand.
What Latent Demand Means
Latent demand means demand that already exists in human or business goals but stays hidden because a binding friction keeps the action below the threshold where it is worth taking. Sometimes the friction is money. Often it is time, scarce expertise, switching cost, legal risk, or the simple fact that no one wants to do the task manually.
New products often grow by serving people whose alternative was no product at all. Sony's first pocket radio sounded bad by the standards of living-room radios, but it gave teenagers something they could not easily get before: private access to music. That was enough to create a new market foothold.
That is why latent demand matters more than feature demand. Feature demand asks what users request inside the current workflow. Latent demand asks what users would do much more of if the workflow became cheap, fast, and good enough to trust.
It is also why interviews mislead on their own. People are good at describing pain. They are weaker at predicting how much more of something they would consume once a threshold breaks. The market often learns that only after the price or effort drops.
The Hidden Demand AI Makes Visible
AI changes the economics of cognition. It lowers the variable cost of drafting, summarizing, coding, classifying, translating, searching, planning, and checking. The cost curve has bent fast. Stanford's 2025 AI Index estimates that the inference cost of a system performing at GPT-3.5 level fell by more than 280x between November 2022 and October 2024. Adoption has moved just as quickly. An NBER study found that by late 2024 nearly 40% of U.S. adults aged 18 to 64 had used generative AI, 23% of employed respondents had used it for work in the prior week, and 9% used it every workday. By July 2025, OpenAI reported 700 million weekly active ChatGPT users on consumer plans, with practical guidance, seeking information, and writing accounting for three-quarters of conversations. (Stanford HAI)
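As a back-of-envelope sketch of how fast that decline is, assume the 280x figure spans the roughly 23 months between November 2022 and October 2024 and compounds smoothly (an assumption; the actual drop came in discrete steps):

```python
import math

# Stanford AI Index figure: inference cost for GPT-3.5-level performance
# fell by more than 280x between November 2022 and October 2024.
cost_ratio = 280
months = 23  # Nov 2022 -> Oct 2024, approximately

# Implied compound decline, assuming a smooth curve.
monthly_factor = cost_ratio ** (1 / months)  # ~1.28x cheaper each month
annual_factor = monthly_factor ** 12         # ~19x cheaper each year

print(f"~{monthly_factor:.2f}x cheaper per month")
print(f"~{annual_factor:.0f}x cheaper per year")
```

An input that gets roughly 19x cheaper per year is not a normal input. Budget lines do not move that fast; rationing habits move even slower.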
Those figures show where demand first appears. It rarely starts as a bold new category with a fresh budget line. It starts with ordinary work that people used to ration: advice they did not seek, drafts they did not write, analyses they did not run, questions they left unanswered, follow-ups they meant to send and never sent.
AI matters here because many of those tasks were suppressed less by desire than by friction.
Demand Was There All Along
The mechanism is consistent across markets.
First, people value an outcome. A support team wants more tickets handled well. A software team wants more tests, more documentation, and more refactoring. A student wants tailored feedback, not generic feedback. A sales team wants account-specific material, not another template.
Second, some friction makes the outcome uneconomic. The work is too slow, too expensive, too expert-heavy, or too annoying to sustain. So people ration. They only do it for top customers, severe incidents, high-stakes deals, quarter-end reviews, or exam week.
Third, a new tool cuts that friction below a threshold. The threshold is rarely abstract. It has a number or a process behind it: a reply in 10 seconds, 95% field extraction, review in under a minute, integration inside the CRM, an audit trail, or human approval before an external action.
Fourth, behavior expands. The task moves from special case to standard practice.
That expansion tends to happen in 3 stages. Compression comes first. The same task gets faster. Expansion follows. People do more of the task because it is now worth doing. Then comes recomposition. The surrounding workflow changes, and a new product form appears. A faster writing assistant is compression. A system that drafts every low-stakes response across the team is expansion. A tool that routes, drafts, escalates, logs, learns, and closes the loop is recomposition.
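A toy model makes the gap between compression and expansion concrete. All numbers below are invented for illustration, using the support-drafting example:

```python
# Toy model of compression vs. expansion (all numbers are illustrative).
# Before: 200 tickets/week get hand-written replies at 10 minutes each.
tickets_before = 200
minutes_before = 10

# AI cuts drafting time to 4 minutes per ticket.
minutes_after = 4

# Compression only: same volume, less time.
hours_saved = tickets_before * (minutes_before - minutes_after) / 60

# Expansion: the long tail of 600 previously-ignored tickets now gets
# drafted replies too, because each one is finally cheap enough.
tickets_after = 800
hours_before = tickets_before * minutes_before / 60
hours_after = tickets_after * minutes_after / 60

print(f"Compression alone saves {hours_saved:.0f} hours/week")
print(f"Expansion: coverage grows from {tickets_before} to {tickets_after} tickets")
print(f"Total drafting time: {hours_before:.0f}h -> {hours_after:.0f}h per week")
# Note: total time spent goes UP even as per-ticket cost falls, because
# suppressed demand released. Recomposition (stage three) changes the
# workflow itself, which this arithmetic cannot show.
```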
Most builders stop at the first stage. The larger businesses usually form in the third.
The Best AI Markets Don't Look Big at First
The early evidence fits this pattern well. In a preregistered experiment with 453 professionals, ChatGPT cut time on mid-level writing tasks by 40% and raised output quality by 18%. In customer support, Brynjolfsson, Li, and Raymond found that a GPT-based assistant raised issues resolved per hour by 14% on average and by 34% for novice and low-skill agents. In a GitHub Copilot trial, developers finished a coding task 55.8% faster. In BCG's field experiment, consultants using AI completed 12.2% more tasks, finished 25.1% faster, and produced work that scored over 40% higher in quality. (PubMed)
What do those results have in common? They all describe tasks people already cared about but under-produced. Firms wanted better support coverage. Professionals wanted faster drafting. Developers wanted less time spent on boilerplate. Consultants wanted quicker first passes on structured analysis. AI lowered the cost enough that more of the desired work got done.
The same BCG study also shows the constraint. On a task chosen to sit outside the model's capability frontier, AI reduced the time consultants spent yet hurt correctness. That result is a clean warning. Demand only releases when cost, quality, and trust cross the threshold together. Lower generation cost by itself is not enough. If the user still has to absorb the error risk, the demand stays bottled up. (Harvard Business School)
The macro picture is slower than the demos imply. Acemoglu's task-based estimates suggest that if current task-level gains diffuse broadly, the rise in total factor productivity over the next decade may land around 0.53% to 0.66%, with bigger gains depending on genuinely new tasks rather than simple automation of existing ones. That sounds modest because it is. Supply shocks create large value only when firms reorganize work around them. (NBER)
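The shape of that arithmetic is easy to sketch. The decomposition below follows the spirit of the task-based approach; the input values are illustrative stand-ins chosen to land near the published range, not the paper's exact figures:

```python
# Task-based TFP sketch (inputs are illustrative, not the paper's values).
share_of_tasks_exposed = 0.20      # tasks where AI could plausibly help
share_profitably_automated = 0.23  # exposed tasks worth automating near-term
avg_cost_savings = 0.144           # average cost reduction on those tasks

tfp_gain = share_of_tasks_exposed * share_profitably_automated * avg_cost_savings
print(f"Implied TFP gain over the decade: ~{tfp_gain:.2%}")  # ~0.66%
```

Multiply three fractions together and the product is small. The estimate only grows if AI creates new tasks, or if the "profitably automated" share climbs as workflows reorganize.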
What we know already is local and uneven. The open question is where those local gains compound into whole workflows and durable budgets.
Why Friction, Not Preference, Shapes Markets
The first mistake builders make is to treat user requests as the market. Users speak from inside current constraints. They ask for faster note-taking. The buried demand may be broader account coverage, better memory across customer interactions, or more follow-up with the long tail of accounts that reps currently ignore.
The second mistake is to stop at generation. As generation gets cheaper, the bottleneck moves. It often moves to review, routing, permissions, context, or execution. A model can draft a response. The product still has to know when to send it, who can approve it, what prior commitments matter, what system to update, and how to recover when something goes wrong.
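In product terms, the draft is one function call and everything around it is the product. Here is a minimal sketch of that shift, with hypothetical names throughout:

```python
from dataclasses import dataclass

# Hypothetical types and stubs; in a real product these would be the
# model call, the delivery channel, the review queue, and the audit log.

@dataclass
class Ticket:
    id: int
    body: str

@dataclass
class Draft:
    text: str
    confidence: float  # calibrated score from the drafting step

def draft_reply(ticket: Ticket) -> Draft:
    # Stand-in for the model call: the cheap, commoditizing part.
    return Draft(text=f"Re: {ticket.body[:40]}", confidence=0.95)

def handle_ticket(ticket: Ticket) -> str:
    draft = draft_reply(ticket)                  # generation
    if draft.confidence < 0.90:                  # routing: who decides?
        print(f"[queue] ticket {ticket.id} -> human review")
        return "escalated"
    print(f"[send]  ticket {ticket.id}: {draft.text}")   # execution
    print(f"[audit] ticket {ticket.id} logged")  # recovery depends on this
    return "sent"

handle_ticket(Ticket(id=101, body="Where is my refund?"))
```

Everything after the first line of `handle_ticket` is the part users end up trusting, and the part competitors cannot copy by swapping in a cheaper model.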
The third mistake is to measure time saved instead of work expanded. Time saved is useful, but it is not the strongest signal. The stronger signal is that users widen coverage, increase frequency, or adopt personalization they previously withheld. When a team moves from reviewing 5% of calls to 100% of calls, or from doing monthly market scans to daily ones, demand has become visible.
The fourth mistake is to confuse trial with dependence. Trial is cheap. Real adoption means habits, budgets, and operating procedures change. The hard question after a successful demo is, "What are they doing more of now?"
The Shape of Hidden Demand
Map every opportunity across 5 fields.
- Desired outcome. What do people wish they could do more often, more broadly, or more personally?
- Binding friction. What suppresses that behavior now? Cost, time, expertise, trust, integration, compliance, switching pain, or low-status work that no one wants to own?
- Release threshold. What has to become true before behavior changes? Say it in operational terms. A 30-second answer. Less than 5% extraction error. Review in under 1 minute. Native workflow integration. Auditability. Human override.
- Expansion path. If the threshold is crossed, what grows? Frequency, coverage, personalization, speed, autonomy, or the user base itself?
- Capture surface. Where can a product own the new flow of work? Memory, approvals, collaboration, data feedback, routing, billing, or the system of record?
If one of those fields stays fuzzy, the market is still fuzzy.
Take compliance review. The desired outcome is obvious: more material checked before it ships. The binding friction is scarce specialists and slow triage. The release threshold may be high recall on policy flags, a full audit log, and human review on exceptions. If that threshold is met, the expansion path is clear: every document gets screened, not just the most sensitive ones. The capture surface sits in policy retrieval, workflow, approvals, and case management. The draft model is the easy part.
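One way to keep those fields honest is to write the map down as a structure with nothing optional. A minimal sketch in code, with the compliance example filled in:

```python
from dataclasses import dataclass

@dataclass
class LatentDemandMap:
    desired_outcome: str    # what people want more of
    binding_friction: str   # what suppresses it today
    release_threshold: str  # operational terms, not vibes
    expansion_path: str     # what grows once the threshold is crossed
    capture_surface: str    # where a product owns the new flow of work

compliance_review = LatentDemandMap(
    desired_outcome="More material checked before it ships",
    binding_friction="Scarce specialists and slow triage",
    release_threshold="High recall on policy flags, full audit log, "
                      "human review on exceptions",
    expansion_path="Every document screened, not just the sensitive ones",
    capture_surface="Policy retrieval, workflow, approvals, case management",
)
```

If you cannot fill a field without hand-waving, that field is where your market research should go next.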
What Users Want but Don't Do
The best hunting grounds share a few traits. The task is recurring. People already value the outcome. Current usage is visibly rationed. The threshold for release is measurable. There is somewhere durable to own the workflow after adoption.
That is why early AI opportunities keep clustering around support, coding, documentation, research, tutoring, onboarding, finance operations, sales prep, and internal knowledge maintenance. These are all areas where teams already wanted more coverage or more personalization than the old cost structure allowed.
There is a simple field test for this. Listen for sentences like these: "We only do this for top customers." "We batch that once a quarter." "Legal has to review every one." "Someone on the team just knows how to do it." "We document it when something breaks." Those are latent-demand tells. They point to work that matters but still sits below the action threshold.
When Latent Demand Stays Latent
Some markets never break open, even after a technical leap. The usual reason is that the remaining bottleneck is elsewhere.
Sometimes the task is too rare or too low-value. Sometimes the real cost sits in data access or approvals, not generation. Sometimes verification is more expensive than creation. Sometimes the organization cannot absorb extra output. A company may be able to generate 10x more sales material and still lack the reps, channels, or customer attention to use it.
Regulated domains make this sharper. Healthcare, law, finance, and public administration can show intense demand and slow adoption at the same time. The need is real. The release threshold includes provenance, auditability, liability, and institutional permission, not just model accuracy.
AI does not remove friction in one clean motion. It often strips out one layer and exposes the next one.
Latent Demand Is Where the Value Is
Once demand releases, new bottlenecks appear.
The first bottleneck is usually creation. The next is verification. Then ranking, distribution, and decision rights. As raw generation gets cheap, value shifts toward systems that can filter, compare, remember, explain, and act safely. One-shot generation tools will probably keep getting squeezed. Trusted workflow products have more room.
Cheap personalization also raises the floor. Once a student can get tailored feedback, generic feedback feels worse. Once a support team can draft replies for the long tail, leaving those tickets untouched starts to look like neglect. Once a company can run analysis on every account, the old habit of reserving insight for the top 20 accounts becomes harder to defend. Supply changes expectations.
When Friction Drops, Demand Surges
Latent demand is one of the most useful concepts for understanding AI because it forces the harder question: does lower cognitive cost make people do much more of something they already value, and can a product own the new flow of work that follows?
That lens is practical. It tells builders where to look, what to measure, and where not to get fooled. It steers you away from feature theater and toward suppressed behavior. It reminds you that markets do not reveal themselves cleanly when constraints are high. They reveal themselves when someone cuts the cost of action hard enough that old rationing breaks.
We think the biggest AI businesses will keep emerging from that break. Find the work people still ration. Find the friction that keeps them rationing it. Then build the system that makes the higher-volume version of that work normal.