Ep. 206: AI Council Metrics, Motivating Passive Adopters & True Bottleneck in the AI Race
TL;DR
The real enterprise AI bottleneck is ownership, not technology — Roetzer says many companies handed AI to the CIO as a “technology problem” after ChatGPT and GPT-4, when the real challenge was cross-functional literacy and business-unit leadership in marketing, sales, customer success, and ops.
A growing internal AI divide is creating winners and losers inside the same company — In his example of a 100-person team getting AI Academy access, 20–30% may resist AI entirely while a smaller group becomes daily power users, dramatically increasing their productivity and leaving non-adopters vulnerable in performance reviews and, eventually, in their job security.
Most companies still get AI strategy wrong because they start before they understand the tech — His blunt answer to what organizations fundamentally miss: “They don’t have one,” because leaders try to build strategy without deep AI literacy around reasoning models, multimodality, no-code development, deep research, and workflow redesign.
The biggest near-term labor risk may be underemployment, not just layoffs — Roetzer says he has warned for two years that AI will eliminate millions of jobs, but his sharper concern now is college grads with degrees in fields like economics and marketing ending up in retail because entry-level knowledge work shrinks faster than companies can redesign roles.
AI will likely augment senior leaders while automating entry-level tactical work — His working view is that executives and experienced managers will use AI as a strategic thought partner, while junior-level tasks get heavily automated, creating a serious open question around how organizations will create entry-level employment at scale.
The practical no-brainer use case is using reasoning models as a strategic thought partner — He calls models like ChatGPT, Gemini, and Claude a “cheat code” for decision-making and problem-solving, and says for high-value work he often runs the same prompt across three to six models, then has the other models critique the strongest output.
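The multi-model workflow described in that last point — run one prompt across several models, pick the strongest draft, then have the other models critique it — can be sketched in a few lines of Python. This is an illustrative assumption, not anything shown in the episode: `query_model` is a hypothetical placeholder you would replace with real SDK calls (OpenAI, Google, Anthropic), and picking the "strongest" draft by length is a stand-in for human judgment or a scoring rubric.

```python
# Sketch of the fan-out-then-critique workflow described above.
# query_model is a hypothetical placeholder -- swap in real API calls.

def query_model(model: str, prompt: str) -> str:
    """Placeholder: send prompt to the named model, return its reply."""
    return f"[{model} draft for: {prompt}]"

def fan_out_and_critique(prompt: str, models: list[str]) -> dict:
    # Step 1: run the identical prompt across every model.
    drafts = {m: query_model(m, prompt) for m in models}

    # Step 2: pick the "strongest" draft. Length is used here only as
    # a stand-in for human review or an automated rubric.
    best_model = max(drafts, key=lambda m: len(drafts[m]))
    best_draft = drafts[best_model]

    # Step 3: ask each *other* model to critique the winning draft.
    critiques = {
        m: query_model(m, f"Critique this draft:\n{best_draft}")
        for m in models
        if m != best_model
    }
    return {
        "best_model": best_model,
        "best_draft": best_draft,
        "critiques": critiques,
    }

result = fan_out_and_critique(
    "Draft a launch plan for our Q3 product update.",
    ["ChatGPT", "Gemini", "Claude"],
)
print(result["best_model"], len(result["critiques"]))  # → ChatGPT 2
```

With real API calls in place of the stub, the same three-step loop scales to the three-to-six-model routine he describes for high-value work.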
The Breakdown
Why Amazon’s AI slowdown may be less “maturity” than messy speed
The episode opens with Amazon reportedly slowing parts of its AI rollout after agent quality issues, and Paul Roetzer’s read is pretty grounded: this is what happens when everyone is moving too fast. He ties it to Meta’s own “agents gone rogue” moment and says the bigger lesson is that frontier companies are learning in public while enterprises are being reminded to experiment responsibly, especially once agents can access files, make decisions, and take actions.
AI-native vs. AI-emergent: the framework he keeps coming back to
Asked whether large enterprises are doomed or advantaged, Roetzer revives his 2023 framework: future companies will be AI-native, AI-emergent, or obsolete. AI-native firms get to start clean — no legacy pricing, systems, or talent baggage — while established firms have brand, customers, capital, and expertise but must move fast enough to overcome inertia; Apple and Adobe come up as examples of how even well-resourced giants can still stumble.
The ownership problem: why AI adoption keeps stalling inside enterprises
When Kathy asks who actually owns adoption or data readiness, Paul says the problem often persists because nobody truly does. Too many companies shoved AI onto IT in 2023, treating it as a pure tech issue instead of a company-wide operating model shift, and he’s especially emphatic that “data readiness” is often used as a reason to delay work when, in his view, 90% of a marketing team’s first-year AI use cases don’t need sensitive data at all.
The AI divide is real — and he’s hearing the hard consequences privately and publicly
This is one of the sharper sections: Paul describes a company rolling out 100 AI learning licenses and seeing three camps emerge — resisters, dabblers, and obsessed power users. His point is that the power users rapidly become much more valuable while some peers stand still, and he says the uncomfortable reality he’s been hearing “behind closed doors” is now becoming public: people who refuse to use AI tools eventually won’t keep those jobs, though he argues companies should give workers a clear runway, training, and expectations before forcing that outcome.
His contrarian take: the job shock is coming, and leaders aren’t ready
Asked for an AI opinion many people still resist, he returns to the idea that millions of jobs will be lost, though he now says people push back less than they did six months ago. What really worries him is underemployment — graduates taking whatever work they can get — and he says after countless conversations with leaders at major companies, he still hasn’t found one he believes is truly prepared for what’s coming.
Why over-automation keeps backfiring, from Klarna to customer chatbots
On whether companies will regret automating too much too fast, he basically says: absolutely, and repeatedly. He points to Klarna’s swing from “AI-first” staffing claims back to hiring humans, OpenAI planning to grow from roughly 4,500 to 8,000 employees, and his own team’s experience turning a chatbot off, then later back on after more diligence; his larger point is that the “human element” will keep snapping back into focus when AI implementations feel efficient on paper but wrong in reality.
Three years from now: senior people doing junior work with AI
This section gets especially animated because he’s clearly working through the idea in real time for a possible MAICON keynote. His hypothesis: experienced leaders will increasingly oversee agent swarms and do a huge amount of formerly junior tactical work themselves — building products in Claude Code, generating launch plans, creating assets — which makes the central future-of-work question not whether AI can do entry-level work, but how companies create entry-level jobs once senior people can.
What leaders should focus on now: literacy, governance, and showing outcomes
The final stretch turns practical. He says the reason most AI strategies fail is simple — leaders lack enough AI literacy to design one well — and his advice is to learn just enough through classes, podcasts, trusted voices, and role-specific use cases to separate signal from noise. From there, his playbook is clear: use governance where data and agents create real risk, show skeptical executives business outcomes before demos, and make adoption personal by solving real pain points — like giving someone their Sunday night back by automating the report nobody wants to write.