AI in Healthcare - The Real-World Realities w/ Gowtham Chilakapati
TL;DR
Healthcare AI ROI is being framed too narrowly — Gowtham Chilakapati argues executives still evaluate agentic AI like an ERP rollout, when the real upside is productivity amplification that could grow revenue 10x or 20x, not merely cover for cutting 40% of staff, as the Block layoff headline suggested.
The winning enterprise pattern is boring on purpose: human-in-the-loop, narrow scope, then scale — In a large insurance setting, his team avoided flashy claims automation and instead targeted a member-advocate workflow, saved about 10% in one unit, and turned that into millions through repetition rather than a single miracle use case.
Big-company AI adoption is chaotic until someone builds a decision framework — After an initial six months of everyone pitching everything from documentation to generic AI ideas, his org created a five-dimension ROI model that looked beyond headcount to customer retention, employee empowerment, and operational outcomes.
No one truly owns AI outcomes in most enterprises yet — Chilakapati says there is still no clear role that connects product, finance, operations, and engineering accountability for an AI deployment; after one project failed badly and forced a reset, his organization treated AI outcomes as a shared responsibility instead.
In regulated industries, the real risk is data handling, not model hype — He warns that teams pasting production or patient data into tools like ChatGPT without de-identification are effectively inviting future HIPAA trouble, noting OpenAI does not offer a BAA and that enterprise tenancy is not the same as model-level isolation.
The practical impact of AI shows up when you translate it into business math — His pitch to executives was not “cool LLM demo,” but hard numbers: if AI cuts average handle time by 10% and call-center operations cost roughly 7 cents per second across 60 million calls a year, the savings become impossible to ignore.
The Breakdown
Why AI layoffs are the wrong story
Joe opens with the Block news and the 40% layoff headline, and Gowtham immediately pivots to the executive mindset behind it. His point is simple: agentic AI is an amplifier, so the smart question is how to use the same headcount to grow revenue 10x or 20x, not how to use the tech as cover for workforce cuts.
What actually worked in a large insurance company
When agentic AI hit the hype cycle in 2024, his company invested heavily — he mentions a $1 billion commitment — but his team stayed grounded. Instead of automating high-risk claims adjudication, they focused on a safer human-in-the-loop advocate workflow, found about 10% savings in one business unit, and used that as the repeatable pattern that could scale into millions.
The first six months were chaos
He describes the early phase as everybody wanting AI for everything, including use cases that already had perfectly fine tools like Amazon Transcribe. It took six months just to stop bad ideas, build a multidimensional ROI framework, and force every proposal to show value across several guardrails instead of hand-wavy “AI will help” optimism.
The enterprise AI job nobody actually has
One of the stickiest parts of the conversation is his claim that there is no real owner for AI outcomes today. Product can champion it, engineering can build it, finance can ask for numbers, but nobody naturally owns the full operational result — and after one failed project, his team had to manage AI as a shared-responsibility function with regular leadership cadences instead.
Selling AI inside a big company means showing the math
Chilakapati says projects get approved when someone can sell them with proof of value, not LinkedIn vibes. His example is call-center economics: if average handle time drops 10%, and a regulated healthcare company spends about 7 cents per second across 60 million calls a year, the savings story suddenly becomes very real very fast.
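The episode gives three of the four numbers needed for that savings story: call volume, per-second cost, and the handle-time cut. A back-of-envelope sketch makes the scale concrete; note the 8-minute average handle time below is an assumption added for illustration, since the episode does not state an average call length.

```python
# Back-of-envelope call-center savings using the episode's figures:
# ~60M calls/year, ~$0.07 per second of handle time, 10% AHT reduction.
# ASSUMPTION: an 8-minute average handle time, chosen only to make
# the arithmetic runnable; the actual figure was not given.

calls_per_year = 60_000_000
cost_per_second = 0.07            # dollars per second of handle time
assumed_aht_seconds = 8 * 60      # hypothetical 8-minute average call
aht_reduction = 0.10              # the 10% cut attributed to AI

annual_handle_cost = calls_per_year * assumed_aht_seconds * cost_per_second
annual_savings = annual_handle_cost * aht_reduction

print(f"Annual handle-time cost: ${annual_handle_cost:,.0f}")
print(f"Savings at a 10% AHT cut: ${annual_savings:,.0f}")
```

Under that assumed call length, a 10% trim lands in the low hundreds of millions of dollars per year, which is the "impossible to ignore" magnitude Chilakapati describes; the exact figure scales linearly with whatever the real average handle time turns out to be.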
Regulation slows adoption, but data hygiene is the real battleground
Joe pushes on why regulated industries feel behind, and Gowtham says the issue is less the models than the plumbing around them. If your LLM is fed stale data 24 or 48 hours late, it is basically pointless; if you feed it raw patient data, you are worse off, which is why he keeps coming back to de-identification, governance teams, and near-real-time data flow as the real prerequisites.
His blunt warning on ChatGPT, HIPAA, and fake comfort
This is the sharpest caution in the interview: he says enterprise ChatGPT still means the underlying model is learning from what you put in, even if your tenant is isolated from other customers. In his telling, without a BAA and proper de-identification, regulated companies are “walking with HIPAA lawsuits,” and the comforting UI settings around deleting chats or opting out of training are not enough to erase the underlying risk.
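To make the de-identification point concrete, here is a deliberately minimal sketch of pattern-based redaction applied before text ever reaches an external model. The patterns, labels, and example note are all invented for illustration; real HIPAA de-identification (Safe Harbor covers 18 identifier categories) requires far more than a few regexes, which is exactly why Chilakapati points to dedicated governance teams.

```python
import re

# Toy patterns for a few common identifier shapes. Illustrative only --
# NOT sufficient for HIPAA Safe Harbor de-identification.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def deidentify(text: str) -> str:
    """Replace matched identifiers with bracketed placeholder labels."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

# Hypothetical call note, before it would be pasted into any LLM:
note = "Member 555-12-3456 called from 415-555-1234, reach her at jane@example.com"
print(deidentify(note))
# -> Member [SSN] called from [PHONE], reach her at [EMAIL]
```

The design point is ordering, not cleverness: redaction sits in the pipeline before the model call, so even a misconfigured tenant or a changed retention policy never sees raw identifiers. Production systems typically layer NER-based detection and audit logging on top of rules like these.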
AI is redrawing the line between product and engineering
The mood lightens near the end as they talk about using AI personally: Joe has Claude Code building apps while they talk, and Gowtham says the big unlock is that product people can now prototype directly instead of waiting six months for engineering estimates. He describes using personas like architect and business analyst inside ChatGPT to generate epics, features, and stories — compressing what used to be months of enterprise planning into something much faster and much clearer.